Hacker News

zackmorris · today at 3:26 PM · 3 replies

This is very hacker-like thinking, using tech's biases against it!

I can't help but feel like we're all doing it wrong against scraping. Cloudflare is not the answer; in fact, I think they lost their geek cred when they added their "verify you are human" challenge screen and became the new gatekeeper of the internet. That must remain a permanent stain on their reputation until they make amends.

Are there any open source tools we could install that detect a high number of requests and send those IP addresses to a common pool somewhere? So that individuals wouldn't get tracked, but bots would? Then we could query the pool for the current request's IP address and throttle it down based on volume (not block it completely). Possibly at the server level with nginx or at whatever edge caching layer we use.
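To make the throttling half concrete, here's a minimal sketch of the "slow down, don't block" part. Everything here is hypothetical: the shared pool is stubbed as a dict, and the baseline and scaling constants are invented for illustration; a real deployment would query the pool service and apply the delay in nginx or the edge layer.

```python
# Hypothetical sketch: map a pool-reported request count to a delay.
# The shared pool is stubbed as a plain dict for illustration.

def throttle_delay(request_count: int, baseline: int = 60,
                   max_delay: float = 10.0) -> float:
    """Return a delay in seconds for a client, given how many requests
    the shared pool has seen from its address recently.

    At or below `baseline` requests we add no delay; above it, the delay
    grows linearly, capped at `max_delay` -- throttled, never blocked.
    """
    if request_count <= baseline:
        return 0.0
    excess = request_count - baseline
    return min(max_delay, excess * 0.05)

# Stand-in for querying the common pool by IP address.
pool = {"203.0.113.7": 30, "198.51.100.9": 500}

print(throttle_delay(pool["203.0.113.7"]))   # quiet client: 0.0
print(throttle_delay(pool["198.51.100.9"]))  # hot scraper: 10.0
```

The key design choice is that the pool only has to answer "how busy is this address?", and each site decides locally how to translate that into backpressure.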

I know there may be scaling and privacy issues with this. Maybe it could use hashing or zero-knowledge proofs somehow? I realize this is hopelessly naive. And no, I haven't looked up whether someone has done this. I just feel like there must be a bulletproof solution to this problem, with a very simple explanation of how it works, or else we've missed something fundamental. Why all the hand waving?
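On the privacy piece, one possible shape (just a sketch, not a vetted scheme) is reporting a keyed hash of each address under a rotating salt, so the pool aggregates behavior without storing raw IPs, and keys from different epochs can't be linked:

```python
import hashlib
import hmac

def pool_key(ip: str, epoch_salt: bytes) -> str:
    """Derive a pool identifier from an IP without submitting the IP.

    The salt would rotate (say, hourly): within an epoch, reports about
    the same address aggregate under one key; across epochs, keys are
    unlinkable. Truncation trades some accuracy for deniability.
    Note: anyone who knows the salt could still brute-force the small
    IPv4 space, so this is a mitigation, not a zero-knowledge proof.
    """
    digest = hmac.new(epoch_salt, ip.encode(), hashlib.sha256).hexdigest()
    return digest[:16]

k1 = pool_key("203.0.113.7", b"epoch-2024-01-01T00")
k2 = pool_key("203.0.113.7", b"epoch-2024-01-01T01")
print(k1 != k2)  # same IP, different epochs -> unlinkable keys
```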


Replies

ATechGuy · today at 7:00 PM

Scrapers use residential IP proxies, so blocking based on IP addresses is not a solution.

dvfjsdhgfv · today at 5:21 PM

Fighting GenAI scrapers is similar to the fight against email spam. Email spam got largely solved because the whole industry was interested in solving it. But this issue has the industry split: without scraping, GenAI tools are less functional. And there is serious money involved, so they will use whatever means necessary, technical and legal, to fight such initiatives.

smegger001 · today at 6:25 PM

Maybe some proof-of-work scheme to load page content, with increasing difficulty based on IP address behavior profiling.
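The proof-of-work idea could look something like the following sketch (a hashcash-style puzzle; the challenge string and difficulty numbers are made up for illustration). The server hands out a challenge, the client burns CPU finding a nonce, and the server verifies cheaply; suspicious addresses get a higher difficulty, which is roughly 16x more work per extra hex digit:

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    """Client side: find a nonce so that sha256(challenge + nonce)
    starts with `difficulty` leading hex zeros. Expected cost grows
    ~16x per difficulty step."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: one hash to check, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Server picks difficulty from the behavior profile: e.g. 1-2 for a
# normal visitor (milliseconds), 5-6 for an IP hammering the site.
nonce = solve_pow("page:/articles/42", 2)
print(verify_pow("page:/articles/42", nonce, 2))  # True
```

The asymmetry is the point: verification is one hash for the server, while a scraper paying the puzzle cost on every page load finds bulk crawling expensive.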