In one of my classes the approach was the opposite: I’m expected to do Ph.D.-level work as an undergrad, and I’m expected to use AI.
In a different one, the professor just said that as long as you disclose that AI was used, you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to handle it. I personally feel like expecting Ph.D.-level work is the best method for now; I’ve learned more by using AI to do things above my head than from hardcore studying for a semester.
Things like this are well-intentioned, but idk why there aren't more teachers creating optional "side quests" like these for students who want them, instead of forcing them on everyone
optional "side quests" would let teachers keep a standard, accepted "main quest" curriculum and then offer a bunch of (possibly even "fun") "side quests" students can work on in their spare time for extra skill development
I spoke with a bunch of profs about how they were assessing students in the age of AI:
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to analyze a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
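The "event recording" idea above can be sketched in a few lines. This is a hypothetical, simplified event format for illustration only, not Google's actual representation: the document is never stored as text, only as a log of edit events that can be replayed forward.

```python
from dataclasses import dataclass

@dataclass
class Insert:
    pos: int    # character offset where the text went in
    text: str

@dataclass
class Delete:
    pos: int    # start of the removed span
    length: int

def replay(events):
    """Rebuild the final document by playing the edit log forward."""
    doc = ""
    for ev in events:
        if isinstance(ev, Insert):
            doc = doc[:ev.pos] + ev.text + doc[ev.pos:]
        else:  # Delete
            doc = doc[:ev.pos] + doc[ev.pos + ev.length:]
    return doc

# A tiny log: typing with a typo fix, then a paste that gets cut back out.
log = [
    Insert(0, "Hello wrld"),
    Insert(7, "o"),                # fix the typo in place
    Insert(11, " Pasted text."),   # looks like a paste: one big insert
    Delete(11, 13),                # ...and it gets cut back out
]

print(replay(log))  # -> Hello world
```

Because the whole log is kept, an instructor (or a tool) can step through it event by event and see whether a paper grew word by word or arrived as a single large paste.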
I only mention this because, given AI, I'm sure even with a typewriter it's more efficient to have the AI do the work and then just "type it in" on the typewriter, which kind of invalidates the entire purpose in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Why are people promoting the idea that exams are no longer written or given in person? I graduated relatively recently and maybe had one take-home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take-home exam also made it much more difficult than a normal exam, so I would not really say it was easier than a normal in-person test.
I used to make my classes 60-80% project work, 40-80% quizzes all online.
I now do 50% project work, 50% in-person quizzes: pencil on paper, with one page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
I like this. Relatedly, this semester I've been using handwritten quizzes in class. A simple change, but one of the best things I've done, because it changed students' expectations of class prep. Kind of do the readings and sort of prep, and you can coast in class. But if you need to write out quiz answers, you're forced to know the material better, as well as maintain the ability to express yourself.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
Reading all these comments, I feel like US universities are a joke.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
My school couldn't afford typewriters in the 1980's and early 1990's.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
When I was in college, your grade fully depended on the oral exam/debate with the professor. Everything else was but the entry ticket.
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
Uni professor here.
My colleagues that teach hard-skills courses (like data structures and algorithms) either love AI and incorporate it into their teaching at every moment possible, or despise it the same way graphing calculators were despised by high school math teachers when they were introduced nearly 30 years ago.
I teach soft skills classes to engineering students, and I'm unconcerned with students using AI. I write my problems in a way such that, if the student truly understands the assignment, prompting the AI to solve the problem and iterating on it takes a similar amount of time to doing the work themselves. AI is not very good at writing introspectively about the student. In other words, AI isn't going to be helpful when the homework question is "A fellow student comes to you asking for suggestions on how to maximize their chances at landing an internship. What advice do you give them that's immediately actionable?"
Try it: plug that into ChatGPT or your favorite LLM. It parrots the same generic tips everyone tells you, with very little on how to perform the action effectively. Read it, copy it into your advice document, get a poor grade. Try telling other students to take this advice. Note how they don't, because the advice isn't actually actionable enough for them to act on.
LLMs are also not very good at the follow-up question "In a previous assignment you gave specific and actionable advice to a peer on the job search. Which of these suggestions were so good you are now doing them?" A number of students write a "Mental Gymnastics" essay, claiming they are following all their suggestions (because they think that's what the professor wants to hear) while the evidence they provide demonstrates they are not. A student asking an LLM to write the essay for them consistently produces a digital 'pat on the back': a mental gymnastics essay that ultimately makes the student realize how unwilling they are to solve the #1 problem in their college career.
I've done away with exams wherever possible. I stick to project-heavy courses. What I've found to be far more concerning than AI use is the increasing loss of social skills and ability to cooperate within the younger generations. The number of students who would prefer to fail a class instead of talk to literally any human being is astounding.
The number of students who refuse to build soft skills, and who believe that tech is truly a meritocracy where the only thing that matters is 'lines of code', there's no politics, and they won't have to work on-call, crunch, or do code reviews, is also astounding.
I think if your university doesn't do in person exams with pen and paper then the degrees it hands out are not much evidence of anything.
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor even for the interesting courses.
A hand-written essay in class would seem to be a workable mechanism for a student to demonstrate an ability to reason on their own about a subject.
One of my best college professors would review such essays in-person, one-on-one twice each semester.
A typewriter tty would be a fun weekend project.
Better dust off that old AlphaSmart!
... and the price of daisywheel printers is skyrocketing. https://en.wikipedia.org/wiki/Daisy_wheel_printing
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small too, with an integrated monitor and keyboard, so it didn't take over the whole desk, where I still used pencil and paper often.
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there.
i mean, you can just have AI still do the work; you’re just doing data entry with a typewriter.
If AI can do the work, maybe the test should be more focused on what AI can’t do? This is like anyone still doing a traditional coding interview with leetcode problems just because they haven’t yet done the work to figure out what to test for in a world where Claude Code exists.
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public repo code portfolio much less meaningful as a sign of legitimacy
I’m confused about too many things being measured at once. Is Phelps banning AI to ensure her students are fit to pass a terminal examination? And doing so to ensure that her class has a good pass rate, proving she is a good teacher and can keep her job? What if her cohort is particularly dumb? Is she incentivized to make it easy to pass her classes, to get you that A you paid so much for? Or hard, to make that A worth something?
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not improved at anything, you have failed.
I like open-note exams (and perhaps open-book exams, since you need to know the book well to know which page to look at): they force you to condense the material to the salient points and operationalise it to solve problems that are more challenging than simple recall.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
This will only work until somebody figures out how to connect an AI to a typewriter with some sort of microphone. The person will dictate into it with AI-assisted revisions, and once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then at test time the student should be required to stand at the front of the class and "teach back" what they have learned to the entire class and the teacher. The entire class, instructor included, should then take part in a Q&A session to make sure the student's learning is not just memorization, e.g., restating the information learned but using different words, different scenarios, etc.
Just have them write it out. “Ain’t nobody got a goddamn typewriter”.
... meanwhile, all these students graduate, can't find jobs and become plumbers or bricklayers.
Pfft, just grab a teletype and run lpr -P ttyUSB0 ai_generated_report.txt ;-)
The college instructor might as well ban calculators and use abacuses then.
Might be an unpopular opinion in this thread, but college was made worthless for most degrees as soon as the internet got popular and silly performative shit like this is the death knell. College is about learning how to work in an industry. I'd predict an uptick in trade schools and other hands-on work like medicine, and a continuing downturn in so-called formal education for anything white-collar, programming included. Students are customers. Businesses are going to use AI going forward. No reason to waste time on this.
When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn’t do them, the likelihood you’d pass the term exams was pretty darn low.
We already had AI proof education.