Hacker News

ipcress_file · yesterday at 9:50 PM

Do you have any evidence to back this up or is it speculative?

My institution subscribes to TurnItIn's AI detector. The documentation is quite clear that the system is tuned in a manner that produces a significant number of false negatives and minimizes false positives. They also state that they don't report anything under "20% AI-generated" content.

So the marketing I've seen is intended to reassure skittish administrators that the software is not going to generate false accusations.

That being said, I have no idea whether the marketing claims are true. The software is a black box.


Replies

with · yesterday at 11:01 PM

Fair point: the "tuned to flag aggressively" claim was speculative on my part. Turnitin's own documentation says they favor false negatives over false positives.

That said, their accuracy claims have been disputed before. Inside Higher Ed [1] reported that Turnitin's real-world false positive rate was higher than originally asserted, and that the company declined to disclose the updated number. USD also noted that while Turnitin claimed a <1% false positive rate, a Washington Post investigation found a 50% rate on a smaller sample, and that non-native English speakers and neurodivergent students get flagged at higher rates [2].

Granted, those reports are from 2023, and the product (and AI detection in general) has changed drastically since. But the broader incentive problem holds even if the detector itself is conservatively tuned. The product is a black box, and the downstream cost of errors falls entirely on students, not on Turnitin's renewal rate. You don't need aggressive tuning for the incentive structure to be broken.

[1] https://www.insidehighered.com/news/quick-takes/2023/06/01/t...

[2] https://lawlibguides.sandiego.edu/c.php?g=1443311&p=10721367