Hey, thanks for taking the time to write such a thoughtful reply. I'm always open to counterarguments to what I'm saying, and happy to discuss them in a civil manner. I think such discussions are healthy, even without the expectation that we're going to convince one another.
I think our main disagreement is about what constitutes a "fingerprint", and whether CAPTCHAs can work without it.
Let's start from basic principles...
The "Turing test" in the CAPTCHA acronym is merely a vague historical descriptor of what these tools actually do. For one, the arbitrer in the original Turing test was a human. In contrast, the "Completely Automated" part means that the arbitrer in CAPTCHAs has to be a machine.
Secondly, the original Turing test involved a natural language conversation. This would be highly impractical in the context of web applications, and would also go against the "Completely Automated" part.
Furthermore, humans can now be easily fooled by machines in such a test, as the original Turing test has been decisively broken by recent AI advancements.
So taking all of this into account: since machines don't (yet) have the reasoning capabilities to make the bot-or-not distinction the same way a human would, we have to provide them with inputs they can actually process. This inevitably means that the more information we can gather about the user, the more accurate the prediction will be.
This is why I say that CAPTCHAs have to involve fingerprints _by definition_. They wouldn't be able to do their job otherwise.
Can we agree on this so far?
Now let's define what a fingerprint actually is. It's just a collection of data points about the user. In your example, the IP address and user agent are a couple of such data points. The question is: are those two alone enough information for a CAPTCHA to accurately do its job? The IP address can be shared by many users, and can be dynamic. The user agent can be easily spoofed, and is not reliable. So I think we can agree that the answer is "no".
This means that we need much more information for a CAPTCHA to work. This is where device information, advanced heuristics and behavioral signals come into play. Is the user interacting with the page? How human-like are their interactions? Are there patterns in this activity that we've seen before? What device are they using (or claim to be using)? Can we detect a browser automation tool being used? All of these, and many more, data points go into making an accurate bot-or-not decision. We can't rely on any single data point in isolation, but all of them in combination gives us a better picture.
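To make "combining data points" a bit more concrete, here's a toy sketch of how several weak signals might be rolled into one bot-or-not score. Every signal name, weight, and threshold below is invented for illustration; real systems use far more signals and learned models rather than hand-picked weights.

```python
# Toy illustration: combine several weak signals into a single bot-or-not score.
# Names, weights, and the threshold are all made up for the example.

def bot_score(signals: dict) -> float:
    weights = {
        "ip_on_known_botnet": 0.5,        # IP seen in abuse feeds
        "ua_inconsistent_with_tls": 0.3,  # user agent doesn't match the TLS/client hints
        "webdriver_flag_present": 0.6,    # automation marker exposed by the browser
        "no_pointer_movement": 0.2,       # form submitted with zero mouse/touch activity
        "impossible_timing": 0.4,         # fields filled faster than a human could type
    }
    score = sum(weights[name] for name, present in signals.items()
                if present and name in weights)
    return min(score, 1.0)

# No single signal decides the outcome; it's the combination that matters.
if bot_score({"no_pointer_movement": True, "impossible_timing": True,
              "webdriver_flag_present": True}) > 0.8:
    print("challenge or block")
```

The point of the sketch is just that any one of these signals is weak and spoofable on its own, which is exactly why implementations end up collecting many of them.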
Now, this inevitably becomes a very accurate "fingerprint" of the user. Advertisers would love to get ahold of this data and use it for tracking and targeting purposes. The difference is in how it is used. A privacy-conscious CAPTCHA implementation that follows regulations like the GDPR would treat this data as a liability rather than an asset. The data wouldn't be shared with anyone, and would be purged once it's no longer needed.
The other point I'd like to emphasize is that the internet is becoming more difficult and dangerous for humans to use. We're being overrun with bots. As I linked in my previous reply, an estimated 36% of all global traffic comes from bots. That's an insane statistic, and it will only grow as AI becomes more accessible.
So all of this is to say that we need automated ways to tell humans and computers apart to make the internet safer and actually usable by humans, and CAPTCHAs are so far the best system we have for it. They're far from being perfect, and I doubt we'll ever reach that point. Can we do a better job at it? Absolutely. But the alternative of not using them is much, much worse. If you can think of a better way of solving these problems without CAPTCHAs, I'm all ears.
The examples you mention are logistical and systemic problems in organizations. Businesses need to be more aware of these issues, and how to best address them. They're not indicators of problems with CAPTCHAs themselves, but with how they're used and configured in organizations.
Sorry for the wall of text, but I hope I clarified some of my thoughts on this, and that we can find a middle ground somewhere. :) Cheers!
Another point I forgot to mention: it's certainly possible not to gather all these signals. We can present an actual puzzle to the user, confirm whether they solve it correctly, and use signals only from the interaction with the puzzle itself. There are two problems with this. First, it's incredibly annoying and disruptive to actual humans; nobody wants to solve puzzles to access some content, and it's also far from being a "Completely Automated" test... Second, machines have become increasingly good at solving these puzzles themselves. The classic distorted-text image puzzle has been broken for many years, and object and audio recognition puzzles are now broken as well. You see some CAPTCHA implementations coming up with more creative puzzles, but these will all inevitably be broken too. So puzzles are just not a user-friendly or reliable way of doing bot detection.
I'm going to leave it at "agree to disagree"... But here's my wall of text anyway.
Until something more substantive is done to control who can fingerprint (let's assume that's even a reasonable solution), users are forced to turn anti-fingerprinting protection off, and Firefox can NOT roll it out by default (your captchas are the main blocker), or even expose it as a user option in config and advertise it with the caveat that you might get more challenges. Right now you don't just get more challenges, you get a broken internet.
And the statistic that 36% of internet traffic is bot activity is pretty meaningless. I personally have no problem if 90% of the internet is bot activity. We have an enormous amount of bot traffic on our websites - I would say the majority - and I don't block any of it that respects our terms. A ton of it is obviously being used to train LLMs or improve search engines - more power to them. Honestly there's probably an opportunity for monetisation there. Some of it is security scans. Whatever. That is not a problem. Non-human users of the internet will inevitably arise as integration grows, and I've written many a bot myself.

Abuse is the problem. And there are ways to tackle abuse that aren't fingerprinting: smarter heuristics (which are obviously not being used by the "captcha" companies, or I would not be getting blocked on routine use of sites like FedEx or Drupal or my bank after following a link from that very bank or service), hashcash, and smarter actual Turing tests that verify not "human-like" spoofable profiles but actual human-like competence... all without fingerprinting. What we have right now is laziness, plus the fact that fingerprinting is profitable, so all parties involved actually have an incentive not to move away from it. It'll never be perfect, but what we have now is far, far, far from that.
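For anyone unfamiliar with hashcash, the core idea fits in a few lines: the client has to burn a bit of CPU per request, which is negligible for a human but expensive at bot scale. A minimal sketch of the partial-hash-preimage scheme (the difficulty and encoding here are arbitrary, not any particular deployment):

```python
import hashlib
import os

DIFFICULTY = 20  # require ~20 leading zero bits; arbitrary for the example

def solve(challenge: bytes) -> int:
    """Client: try nonces until sha256(challenge + nonce) falls below the target."""
    nonce = 0
    target = 1 << (256 - DIFFICULTY)
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server: a single cheap hash checks the client's work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))

challenge = os.urandom(16)                  # server issues a fresh challenge per request
print(verify(challenge, solve(challenge)))  # True, after some client-side work
```

The asymmetry is the whole trick: verification is one hash, solving is on the order of 2^20 hashes, and no information about the user is needed at all.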
I will say, BTW, that bots are not that hard to block. On a website I maintain we went from 1000+ bot accounts a month to 0 - and it's stayed at 0 for many years - simply by adding an extra hand-rolled element to a generic captcha. The generic captchas are what bots bother to break in most cases. (That probably wouldn't apply to massive services, but those also have the capacity to keep creating new custom ones and be a moving target - it would probably just take one programmer full-time, really.)
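I won't say exactly what our extra element was, but a common hand-rolled addition looks something like the sketch below: a hidden "honeypot" field humans never see, plus a site-specific question that generic bot kits don't have answers for. All field names and the question here are hypothetical.

```python
# Sketch of a hand-rolled addition in front of a generic captcha.
# Field names and the question are hypothetical examples.

SITE_QUESTION = "What colour is the logo at the top of this page?"
ACCEPTED_ANSWERS = {"blue", "dark blue"}

def looks_like_bot(form: dict) -> bool:
    # Bots that blindly fill every input will populate the hidden honeypot field.
    if form.get("website_url", "").strip():
        return True
    # Generic captcha-solving farms won't know a question unique to this site.
    answer = form.get("site_question", "").strip().lower()
    return answer not in ACCEPTED_ANSWERS

print(looks_like_bot({"website_url": "", "site_question": "Blue"}))                 # False
print(looks_like_bot({"website_url": "http://spam.example", "site_question": ""}))  # True
```

The value isn't in the cleverness of the check; it's that it's custom to one site, so breaking it isn't worth a bot operator's time.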
And yes, businesses need to implement these "captcha" solutions better, but the people offering them aren't offering them with transparency about the issues or clean API integration. It's just: get the contract, drop it in front of all traffic, move on.
And, for god's sake, implement the captcha sanely. Don't require third-party javascript, cookies, etc. Have the companies proxy it through their own website so standard security and privacy measures don't block it by default, which happens almost all the time. In fact, in many cases even the feedback when you're blocked is itself blocked... facepalm. Don't block by default on a "suspicious" (i.e. generic) fingerprint, as happens quite often now. Actually SHOW a captcha so the user has a fighting chance and knows what is going on.
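By "proxy it through their website" I mean something as simple as this: the browser only ever talks to the site's own origin, and the server forwards the captcha traffic to the vendor. A rough sketch, assuming a Flask app and an imaginary vendor endpoint (real vendors have their own APIs, and their terms may or may not allow this):

```python
# Rough sketch of same-origin proxying. The vendor URL and path layout are
# imaginary placeholders; the point is that no third-party domain, script,
# or cookie is needed from the browser's point of view.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
VENDOR_BASE = "https://captcha-vendor.example"  # placeholder, not a real service

@app.route("/captcha/<path:subpath>", methods=["GET", "POST"])
def captcha_proxy(subpath):
    upstream = requests.request(
        method=request.method,
        url=f"{VENDOR_BASE}/{subpath}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=10,
    )
    # Served first-party, so ad blockers and privacy extensions have nothing to block.
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))
```

That alone would fix most of the "the captcha itself is blocked by default" breakage.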