Hacker News

neom · yesterday at 7:48 PM

I almost emailed dang this morning to offer to help out, though I'm not particularly technical. A few solutions I thought of:

1 - Honeypot: hide some links LLMs can follow; if stuff gets posted there, it's unlikely to be a human.

2 - Make a captcha that only LLMs can answer. I recently made 2 social networks, one that humans couldn't join because the submission question was too difficult to figure out quickly.

3 - Use an LLM to detect LLMs. On the other social network I made for fun (that a small number of people use), an LLM that looks for moderation issues does a good job of flagging them.

4 - Invites, but vary the number you have to give out by account age + karma.

The first 3 seem like they'd stop some % for some time, but eventually get old.
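The honeypot in idea 1 could be sketched in a few lines. This is just an illustration under my own assumptions (the names `make_trap_path` and `is_trap_hit` are made up, and the hidden-link markup in the comment is one way of many to hide it from humans):

```python
# Sketch of idea 1: a honeypot link only a scraper/LLM agent should follow.
import secrets

def make_trap_path() -> str:
    # Random path rendered into the page inside an invisible element, e.g.
    # <a href="/t/abc123" style="display:none">archive</a>
    return "/t/" + secrets.token_hex(8)

def is_trap_hit(requested_path: str, trap_path: str) -> bool:
    # A human browsing normally never sees the hidden link, so any
    # request for it is a strong bot signal.
    return requested_path == trap_path
```

Any account that requests the trap path gets flagged for review rather than auto-banned, since aggressive crawlers and prefetchers could trip it too.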


Replies

xarope · today at 2:33 AM

You may have a point, i.e. some mechanism to invoke a behavior that only a bot or LLM would perform and a human would not, e.g. a "click this button now" in a hidden div/transparent color, or measuring response time within the page load.

The problem is that once this is found out, the circumvention is easy enough to program into bots/LLMs.

Are we going to reinvent the Voight-Kampff test from Blade Runner?!

fragmede · yesterday at 7:54 PM

Reverse captchas are fun. Click this button 10,000 times to prove that you're a robot!

vivid242 · yesterday at 8:29 PM

We need new ways to prove our humanness.
