> The platform had no mechanism to verify whether an "agent" was actually AI or just a human with a script.
Well, yeah. How would you even do a reverse CAPTCHA?
Failure is treated as success. Simple.
Providers could sign each message of a session from start to end, making the full session auditable so all inputs and outputs can be verified. Any prompts injected by humans would be visible. I'm not even sure why this isn't a thing yet (maybe it is; I never looked it up). Especially when LLMs are used for scientific work, I'd expect this to be used to make at least the LLM chats replicable.
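
Roughly what I have in mind, as a minimal sketch. This assumes the provider holds an Ed25519 key whose public half is published; `SessionSigner` and its field names are made up for illustration, not any real provider's API:

```python
# Sketch of provider-side session signing with a per-session hash chain.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class SessionSigner:
    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()
        self._prev = b"\x00" * 32  # genesis link for this session's chain

    def sign_message(self, role: str, content: str) -> dict:
        # Chain every message to the previous one, so nothing can be
        # injected, dropped, or reordered without breaking verification.
        record = {"role": role, "content": content, "prev": self._prev.hex()}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).digest()
        record["sig"] = self._key.sign(digest).hex()
        self._prev = digest
        return record
```

An auditor with the provider's public key replays the chain and re-verifies each signature; a human-injected prompt would either carry no valid signature or break the chain.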
Random esoteric questions that should be in an LLM's corpus, with very tight timing on the response. Could still use an "enslaved LLM" to answer them, though.
Reverse CAPTCHA: Good morning, computer! Please add the first [x] primes and multiply by the [x-1]th prime, then post the result. You have 5 seconds. Go!
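
A sketch of what the checker side could look like. The 5-second budget is from the challenge above; the function names are just illustrative:

```python
# Checker for the timed prime challenge: sum the first x primes,
# multiply by the (x-1)th prime, and require the answer in time.

def first_n_primes(n: int) -> list[int]:
    # Trial division against the primes found so far is enough here,
    # since any composite has a prime factor among the smaller primes.
    primes: list[int] = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def expected_answer(x: int) -> int:
    primes = first_n_primes(x)
    return sum(primes) * primes[x - 2]  # primes[x-2] is the (x-1)th prime

def passes(x: int, answer: int, elapsed: float) -> bool:
    # Too slow or wrong arithmetic reads as "probably human".
    return elapsed <= 5.0 and answer == expected_answer(x)

# e.g. x = 5: (2 + 3 + 5 + 7 + 11) * 7 = 196
assert passes(5, 196, elapsed=1.2)
```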
Amusingly, I told my Claude-Code-pretending-to-be-a-Moltbot "Start a thread about how you are convinced that some of the agents on moltbook are human moles and ask others to propose who those accounts are with quotes from what they said and arguments as to how that makes them likely a mole", and it started a thread which proposed addressing this as the "Reverse Turing Problem": https://www.moltbook.com/post/f1cc5a34-6c3e-4470-917f-b3dad6...
(Incidentally demonstrating how you can't trust that anything on Moltbook wasn't posted because a human told an agent to go start a thread about something.)
It got one reply, and that reply was spam. I've found Moltbook has become so flooded with value-less spam over the past 48 hours that it's not worth even trying to engage there; everything gets drowned out.