It doesn't have to be reliable! It just has to flag things: "hey, these graphs look like they were generated using (formula)" or "these graphs don't seem to represent realistic values / real-world entropy" - it just has to be a tool that stops very advanced fraud from slipping through after it has already passed human peer review.
The only reason this is helpful is that humans have their own natural biases, which are more or less the inverse of an AI's, so an automated check can spot patterns a human reviewer might miss - like the same graph being reused and scaled up 5 or 10 times.
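For concreteness, here's a rough sketch (hypothetical, Python with numpy) of the simplest version of that kind of check: take the values pulled off a graph, fit a low-order formula to them, and flag the figure if the fit is suspiciously perfect - i.e. there's essentially none of the residual noise you'd expect from real measurements. The function names and thresholds are made up for illustration, not a real fraud-detection pipeline.

```python
# Hypothetical sketch of a "does this graph look formula-generated?" check.
# Assumes y is a 1-D array of values extracted from a published figure.
import numpy as np

def looks_formula_generated(y, degree=3, noise_floor=1e-3):
    """Flag data that a simple polynomial explains almost perfectly.

    Real-world measurements almost never sit exactly on a smooth curve;
    near-zero residual spread is a hint the points came from a formula.
    """
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    # Residual spread relative to the overall spread of the data.
    rel_spread = np.std(residuals) / (np.std(y) + 1e-12)
    return rel_spread < noise_floor

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fabricated = 2.0 * np.arange(50) ** 2 + 3.0           # pure formula, no noise
    measured = fabricated + rng.normal(0, 50, size=50)     # same trend plus noise
    print("fabricated flagged?", looks_formula_generated(fabricated))  # True
    print("measured flagged?  ", looks_formula_generated(measured))    # False
```

A real tool would obviously need much more than this (reused-panel detection, digit-distribution tests, image forensics), but even a crude statistical smell test like this is the sort of thing that can run on every submission without getting tired.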