Well, so far it's being solved by fingerprinting everyone uniquely, and punishing people who use anti-fingerprinting measures with essentially unusable websites. So the CAPTCHA itself is just window dressing.
I get that argument, as someone who uses those privacy-preserving methods myself. I've dealt with annoying CAPTCHAs for many years. The problem is that a CAPTCHA, by definition, can't do its job unless it can gather as much information about the user as possible. There are obvious privacy concerns here, but companies operating under regulations like the GDPR are generally more conscious about this.
So what should be the correct behavior if the CAPTCHA can't gather enough information? Should it default to assuming the user is a bot or a human?
I think that decision should be up to each site, depending on how strict they want the behavior to be. So it's a configuration setting rather than a CAPTCHA problem; see the sketch below.
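To make that concrete, here's a rough sketch of what I mean. This is hypothetical code, not any real CAPTCHA vendor's API; the names (CaptchaConfig, fail_open) and the thresholds are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class CaptchaConfig:
        # Site-level policy for when not enough signal can be gathered.
        fail_open: bool = True    # True: default to human; False: default to bot
        min_signal: float = 0.3   # below this, the risk score isn't trustworthy
        bot_threshold: float = 0.8

    def classify(risk_score, signal_strength, cfg):
        # Too little information (e.g. fingerprinting is blocked):
        # the site's configuration decides the default, not the CAPTCHA.
        if risk_score is None or signal_strength < cfg.min_signal:
            return "human" if cfg.fail_open else "bot"
        return "bot" if risk_score > cfg.bot_threshold else "human"

    # A privacy-friendly forum might set fail_open=True, while a ticket
    # seller worried about scalper bots might set fail_open=False.
    print(classify(None, 0.0, CaptchaConfig(fail_open=True)))  # -> human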
In a broader sense, think about the implications of not using a CAPTCHA. The internet is overrun with bots; they comprise an estimated 36% of global traffic[1]. Cases like ProductHunt are not unique, and we see similar bot statistics everywhere else. These numbers will only increase as AI gets more accessible, making the current web practically unusable for humans.
If you know of a better alternative to CAPTCHAs, I'd be happy to hear about it, but to me it's clear that the path forward is for websites to detect who is or isn't a bot, and restrict access accordingly. So improving these tools, in both detection accuracy and UX, should be our main priority for mitigating this problem.
[1]: https://investors.fastly.com/news/news-details/2024/New-Fast...