Hacker News

Training students to prove they're not robots is pushing them to use more AI

131 points by PretzelFisch | today at 7:01 PM | 135 comments

Comments

kayo_20211030 · today at 8:38 PM

Perhaps we should not grade students on weekly, or other occasional, writing during the term or semester.

How about going back to the old system where, apart from experimental lab work, nothing is graded until the end of the term?

All weekly assignments should just be considered prep for one exam at the end of the term where the student has an opportunity to demonstrate mastery of the course's subject matter. They can prepare as they wish, use AI, and even cheat on the homework, but there will be a revelation at the end of the term.

That final test can be proctored, monitored, audited to ensure that whatever words are used are indeed the student's own words. The resulting grade depends on that, and that alone.

The approach of continuous assessment, which to me always seemed suspect and ripe for abuse, was completely broken by the AI tools that are now available.

Zigurd · today at 9:02 PM

The core of the problem the article describes isn't AI or LLMs; it's scam software that claims to catch cheating. It's crap for the same reasons that crime-prediction software is crap. It's selling a panacea, and that kind of product inherently attracts scammers.

If your school uses software to detect AI writing, that's a problem with the quality of your school. The people choosing that software are too stupid to be running a school. The software isn't going to get any better.

Buttons840 · today at 8:21 PM

Tangent:

I've noticed I write a lot differently because of combative online arguments. I have a problem.

So much of my communication is directed to people who don't want to hear me or understand me. So I've become very punchy and repetitive, trying to hammer home ideas that people are either unable or unwilling to understand.

I need to find ways to talk to people who want to hear and understand me.

It's hard to find other people who actually want to hear and understand though. People have different interests, and even when people appear to be working towards the same goal, they often aren't; like a boss who just won't understand the bad news, because it's easier to ignore the problem.

zahlman · today at 8:22 PM

I object to the idea that the LLM writing these students are trying to distinguish themselves from is actually good in the first place. Although students might well end up writing worse because people are trusting the detection of LLM content to other LLMs. (And really, it's bizarre that these massively complex systems required to produce roughly human-like output apparently offer such simplistic reasoning for what they detect as non-human.)

Honestly, I lean towards shaming educators who do that. If you can't detect the whiff of LLM with your own senses, then it has been used properly and shouldn't be faulted. If that premise invalidates your assignment, change the assignment. It's not as if you're assigning this work to test the basic mechanics of writing (grammar, sentence/paragraph structure, parallelism, whatever) — I mean, how much of that did you consciously try to teach? My recollection is, not an awful lot; and I can only imagine it's gotten worse since I was in K-12 (and I went to pretty darn good K-12).

delichon · today at 9:48 PM

I was exploring ai porn, for science, and noticed another perverse incentive. I tried to prompt for a naked man and woman standing next to a pool, but could not get it to generate that image. Instead it insisted that the two characters must be having enthusiastic penetrative sex. A dozen prompts could not escape that strange attractor of porn.

It turns out to be built into the training data. The diffusion model just doesn't have many references of naked people not embedded in porn tropes, so it autocompletes porn.

Online moderation of generated images has the same weird incentive. Since real people seldom film themselves having sex, a naked person not having sex is a red flag for a possible real person, and gets moderated more strongly.

So in the new world, well written sentences are a handicap and nudity is generally accompanied by an exchange of fluids.

Paracompact · today at 7:44 PM

Grade school has never been kind to genuine writers. It reminds me of SAT essays that favored formulaic writing, because guess what: the grading criteria were formulaic!

I think grading in general can be stymying for students' motivation and creative drives.

softwaredoug · today at 8:22 PM

Maybe I’m less worried. Teachers seem to have adapted.

In my experience educators no longer use AI detectors given the risk of false positives. But some work is obviously lazy AI content. When that happens, educators talk to the student to see if they understand what they wrote.

Teachers cope with more in person writing, oral presentations, defense of what’s been written.

If you think about it, the pre-AI computing generation is itself anomalous for having ubiquitous access to efficient human-only writing tools. We probably wrote more than previous generations. Early Internet / blogging culture bears this out.

teo_zero · today at 8:20 PM

One of the skills teachers have always demonstrated is being able to detect when students copy. This has never pushed students to artificially add mistakes to their essays.

If teachers now abdicate this judgment to software, students should be allowed to abdicate their duties to a computer as well.

tl2do · today at 9:49 PM

Training students to write a single theme in multiple styles—including intentionally "bad" writing—is actually a great educational method. It teaches real composition by helping students understand what works and what doesn't. It builds good criteria in students.

But, the article's focus on writing "worse" for AI detectors misses what is important. Trying to distinguish humans from machines does not develop student capability. In fact, it's a fleeting technique because AI writing styles will vary and improve over time.

jccc · today at 9:37 PM

We’re also training young people to get used to being surveilled by automated black-box tools, and to accept serious real-world consequences from their judgements.

These kinds of things are novel to us and deserve skepticism, but to them they're just the world they live in.

etempleton · today at 8:52 PM

When I was in high school I was a better writer when I had time (versus in class), and generally a better writer than I was a student. The net result was fairly often being accused of plagiarism. Not because the teacher had proof (I never plagiarized), but because the teacher couldn’t believe I could write at the level I sometimes wrote at on take-home assignments. Admittedly, I was a wildly inconsistent student.

This reminds me a bit of that. AI writing is—in many ways—objectively very good, but that doesn’t matter if no one thinks you wrote it. AI writing is boring exactly because it is consistent and like any art form people want to see something original.

jupp0r · today at 9:07 PM

Sounds like a great opportunity for kids in high school to learn how to feed back the AI detection results into the model and have this process be automated. Next level would be fine tuning the model via reinforcement learning and sharing it with your friends via Hugging Face.
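The feedback loop this comment describes—rewriting until a detector stops flagging you—can be sketched in a few lines. This is a toy illustration only: both `detector_score` and `rewrite` below are made-up stubs standing in for a real detector API and a real LLM call, and the "AI signals" (em dashes, long words) are deliberately simplistic.

```python
def detector_score(text: str) -> float:
    """Stub detector: pretends em dashes and long words look 'AI-written'.

    Returns a score in [0, 1]; 1.0 means 'definitely AI'. A real detector
    would be an opaque API call, which is exactly why the loop works.
    """
    words = text.split()
    penalty = 0.4 if "—" in text else 0.0
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    return min(1.0, penalty + avg_len / 12)


def rewrite(text: str) -> str:
    """Stub rewriter: strip em dashes and long words.

    A real loop would feed the detector's verdict back into an LLM prompt
    ('rewrite this so it scores lower') instead of this crude filter.
    """
    text = text.replace("—", ",")
    return " ".join(w for w in text.split() if len(w) <= 12)


def evade(text: str, threshold: float = 0.5, max_rounds: int = 5) -> str:
    """Automate the loop: rewrite until the detector score drops below threshold."""
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            break
        text = rewrite(text)
    return text
```

The point of the sketch is that once the detector is queryable, it becomes a training signal for its own defeat—which is also roughly the intuition behind the fine-tuning step the comment mentions.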

carcabob · today at 8:29 PM

A few times in some Discord communities, I've been accused of being A.I. because of how I write. Kind of sad and a bit annoying. I also quite like em dashes, but have felt the need to reduce how much I use them.

Glad to see some schools and teachers teach how to use them well, rather than ban them outright.

themafia · today at 7:47 PM

If you're just going to use software to judge the output of students then why don't we all just keep them at home? I have a computer at home and it seems like everyone from the teachers to the school board have just abdicated their responsibility. This doesn't sound like a system that needs to be maintained.

tliltocatl · today at 9:37 PM

Define "worse". I absolutely hated this formal essay style even before LLMs were a thing. All these "on the other hand", "in conclusion" patterns with loads of generic filler that doesn't convey anything useful. And they make it really hard to tell whether the writer is pretending to know anything, or actually knows their shit but doesn't know how to write so that it doesn't sound like an essay assignment. Good riddance.

On a side note: the fixed-pattern essay thing seems to be an American invention, or at least popularized by the American education system.

with · today at 9:30 PM

nobody's asking who profits from false positives. these AI detection vendors have a direct financial incentive to flag aggressively. more flags = "more value" = more school contracts renewed. same playbook as selling antivirus to your grandma. sell fear, charge per seat, and make the false positive rate someone else's problem.

Someone1234 · today at 7:55 PM

I've started doing this on social media. I got "called out" after using big words or using a - in a sentence. So now I write less good on purpose, so whatever I commented doesn't get drawn into a sidetrack off-topic witch-hunt.

As soon as someone yells "witch" you cannot prove you're not one, and I've even had people put my handwritten comments through "AI detector" websites that "proved" they were AI (they weren't). It literally just highlighted two popular English phrases.

LLMs were trained on sites like HN and Reddit, so now if you write like a HN or Reddit commentator, you sound like AI...

j45 · today at 8:05 PM

The more students read, and the more variety they read, the better they will write.

This will likely be valuable for AI skills too.

theptip · today at 8:17 PM

This is what terrifies me about the public school system. A revolution has occurred, but it’s unevenly distributed.

The schools simply don’t have the flexibility, agility, or frankly it seems motivation to adapt to what has already happened.

The ship has sailed; essay writing is no longer a viable form of assessment.

The idea to try to build a reliable AI detector is asinine, and fundamentally misunderstands how any of this works now, let alone the very obvious trend-lines.

Stop with the lazy half-baked solutions, get your head out of the sand, rethink the whole curriculum. This is an emergency, we needed to be urgently attending to this years ago.

throw73838 · today at 8:29 PM

> The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excel

Didn't this self-censorship process start decades ago? There are certain answers expected in academia; arguing for anything else would get you in trouble. Not using “devoid” seems a pretty minor inconvenience.

For me the biggest wtf is why students are still expected to write graded essays, and to keep up the make-believe that it is somehow a useful and applicable skill.

jmyeet · today at 8:26 PM

The profit motive is corrupting and polluting every level of the education space.

Teachers are being hamstrung on curriculum. The districts enter into contracts that require the use of certain programs for certain amounts of time. We've known for decades (if not a century) that direct instruction works [1] but you can't sell devices, platforms and consulting services that way.

We're literally at the point in education we were in the 1950s when the health benefits of nicotine in your Q zone were lighting up the airwaves.

And generative AI means it's all but impossible to have take-home writing assignments. But hey, this is another opportunity to sell AI or cheating-detection software that's often just an em-dash detector [2].

We have a generation that gets to college quite possibly having never written a book, thanks to social promotion through grades and the constant distraction of electronic devices in classroom settings. I don't even necessarily blame the parents entirely either, because we've constructed a society where 2 people need 5 jobs to make ends meet.

And while all this is going on we have a coordinated and well-funded effort to defund public education and move government funds to private schools, based on the failing public education that's failing because we defunded it. This is usually backed up by some baloney study showing that charter schools produce better results, which really comes down to charter schools being able to be selective with enrollments while public schools cannot be. Plus we mingle special-education kids into public education because those programs got defunded too.

And really that's just a bunch of already affluent people who want a tax break for doing something they were going to do anyway: send their kids to private schools so they don't have to mingle with the poors and aren't taught inconvenient things like human reproduction, critical thinking and self-determination.

And after all of that we just end up teaching kids how to pass standardized tests.

[1]: https://marginalrevolution.com/marginalrevolution/2018/02/di...

[2]: https://medium.com/@brentcsutoras/the-em-dash-dilemma-how-a-...


noemit · today at 7:52 PM

What would assessment look like if we started from "how do humans actually learn and communicate" rather than "how do we catch cheaters"?

dawatchusay · today at 8:13 PM

Did they not even test their AI detection tool to verify that it can detect when something is human-written? That should have been exactly as important as the opposite. Maybe a tool that checked that would be equally ineffective and we’d move on from the subject entirely.
