Our benchmarks on public datasets put our FPR at roughly 1 in 10,000. https://www.pangram.com/blog/all-about-false-positives-in-ai...
Find me a clean public dataset with no AI involvement and I will be happy to report Pangram's false positive rate on it.
That's your job, what the actual fuck?
> Here, we've got a tool that people rightfully call out as dangerous pseudoscience.

> Oh? You want proof it isn't dangerous pseudoscience? Well, get me my provable information and I will!
This attitude alone is all the proof anyone should need that AI detection is about the only thing more debased than undisclosed AI use.
I enjoyed this thoughtful write-up. It's a vitally important area for good, transparent work to be done.