Hacker News

BDPW · yesterday at 8:51 AM (2 replies)

LLMs still routinely make things up about matters like this, so no, this is not a reliable method.


Replies

kachapopopow · yesterday at 9:03 AM

It doesn't have to be reliable! It just has to flag things: "hey, these graphs look like they were generated using (formula)" or "these graphs don't seem to reflect realistic values / real-world entropy." It only has to be a tool that stops very advanced fraud from slipping through after it has already passed human peer review.

The reason this helps is that humans have their own natural biases, roughly the inverse of an AI's, so an AI can catch patterns a human reviewer misses, such as the same graph being reused and scaled up 5 to 10 times.
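As one illustration of what a "realistic values" flag could look like (this is my own sketch, not anything the parent named; a Benford's-law leading-digit test is just one well-known heuristic for spotting fabricated numbers):

```python
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    exp = math.floor(math.log10(abs(x)))
    return int(abs(x) / 10 ** exp)

def benford_score(values) -> float:
    """Chi-squared-style deviation of observed leading-digit
    frequencies from Benford's law. Larger scores mean the data
    looks less like naturally occurring measurements; a screening
    tool would flag high scores for human review, not reject them."""
    nonzero = [v for v in values if v != 0]
    counts = Counter(leading_digit(v) for v in nonzero)
    n = len(nonzero)
    score = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d) * n  # Benford probability * n
        observed = counts.get(d, 0)
        score += (observed - expected) ** 2 / expected
    return score

# Powers of 2 are known to follow Benford's law closely,
# while a block of consecutive integers (all starting with 1) does not.
natural_like = [2 ** k for k in range(1, 200)]
fabricated_like = list(range(100, 200))
print(benford_score(natural_like) < benford_score(fabricated_like))
```

The point stands either way: a cheap statistical screen like this produces false positives, which is fine for a tool whose only job is to route suspicious figures to a human.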

ratg13 · yesterday at 9:04 AM

Nobody should be using AI as the final arbiter of anything.

It is a tool, and there always needs to be a user who can validate its output.