This will be one of the big fights of the next couple years. On what terms can an Agent morally and legally claim to be a user?
As a user I want the agent to be my full proxy. As a website operator I don’t want a mob of bots draining my resources.
Perhaps a good analogy is Mint and the bank-account scraping it had to do in the 2010s, because no bank offered APIs with scoped permissions. Lots of customers complained, and after Plaid made it a big business, the banks eventually relented and built the scalable solution.
The technical solution here is probably some combination of offering MCP endpoints for your actions, and some direct blob store access for static content. (Maybe even figuring out how to bill content loading to the consumer so agents foot the bill.)
> As a user I want the agent to be my full proxy. As a website operator I don’t want a mob of bots draining my resources.
The entire distinction here is that as a website operator you wish to serve me ads. Otherwise, an agent under my control, or my personal use of your website, should make no difference to you.
I do hope this eventually leads to per-visit micropayments as an alternative to ads.
Cloudflare, Google, and friends are in a unique position to do this.
Well, we call them browser agents for a reason: a sufficiently advanced browser is no different from an agent.
Agreed that it will become a battleground, though, because the ability for people to use the internet as a tool (in fact, as their tool’s tool) will absolutely shift the paradigm, undesirably for most of the Internet, I think.
I have a product I built that uses some standard automation tools to do order entry into an accounting system. Currently my customer pays people to manually type the orders in from their web portal. The accounting system is closed, and they don’t allow easy ways to automate these workflows; automation is gated behind mega-expensive consultants. I’m hoping that, in the arms race of locking things down to prevent third-party integration, the AI operator model will end up working.
Hard for me to see how it’s ethical to force your customers to do tons of menial data entry when the orders are sitting right there in JSON.
One solution: Some sort of checksum confirming that a bot belongs to a human (and which human)?
I want to be able to automate mundane tasks, but I should still be confirming everything my bot does and be liable for its actions.
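A minimal sketch of that signed-token idea, assuming a hypothetical identity provider that binds an agent to a specific human and scope (the key, claim names, and HMAC scheme here are all illustrative, not any real standard):

```python
import hashlib
import hmac
import json
import time

# Assumption: an identity provider holds this signing key on the human's behalf.
SECRET = b"identity-provider-signing-key"

def issue_token(human_id, agent_id, scope, ttl=3600):
    """Issue a short-lived token tying an agent to a human and a scope."""
    claims = {"human": human_id, "agent": agent_id,
              "scope": scope, "exp": int(time.time()) + ttl}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_token(token):
    """A website checks the signature and expiry before serving the agent."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and token["claims"]["exp"] > time.time())
```

With something like this, a site could answer "which human does this bot act for?" and revoke or throttle per human rather than per IP. A real deployment would use asymmetric keys so sites can verify without holding the signing secret.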
Actually, the whole banking analogy is a great one, and it’s not over yet: JPMorgan/Jamie Dimon started raising hell about Plaid again just this week [1]. It feels like the stage is being set for the large banks to demand a more direct relationship with their customers, rather than proxying data through middlemen like Plaid.
There's likely a correlate with AI here: if I ran OpenTable, I wouldn't want my relationship with my customers to always be proxied through OpenAI or Siri. Even the App Store is something software businesses hate, because it obfuscates their ability to deal directly with their customers (for better or worse). Extremely few businesses would choose to do business through these proxies unless they absolutely had to; and given the extreme competition in the AI space right now, it feels unlikely to me that these businesses will feel enough pressure to be forced to deal with OpenAI et al.
[1] https://www.cnbc.com/2025/07/28/jpmorgan-fintech-middlemen-p...
This creates real problems for people who need to verify identities/phone numbers. OTPs are notoriously abused by scammers who war-dial phone numbers to probe which ones exist.
We got hit by human verifiers manually war-dialing us, and this is with account creation, email verification, and a captcha in place. I can only imagine how much worse it'll be for us (and Twilio) to do these verifications.
I believe this is a non-issue: you place a captcha to make bypassing it much more costly and abuse less profitable.
LLMs are much more expensive to run than any website is to serve, so you should not expect a mob of bots.
Also keep in mind that these no-interaction captchas use behavioral data collected in the background. Plus, you usually have sensitivity levels configured: depending on your use case, you might want proof that the user is not a bot, or it might be good enough that they simply provide no evidence of being one.
Bypassing these no-interaction captchas can also be purchased as a service; they basically (AFAIK) reuse someone else's session for the captcha bypass.
My personal take about such questions has always been that the end user on their device can do whatever they want with the content published and sent to their device from a web server, may process it automatically in any way they wish and send their responses back to the web server. Any attempt to control this process means attempting to wiretap and control the user's endpoint device, and therefore should be prohibited.
Just my 2 cents; obviously lawmakers and courts may see these issues differently.
I suppose there will be a need for reliable human verification soon, though, and unfortunately I can't see any feasible technical solution that doesn't involve a hardware device. However, a purely legal solution might work well enough, too.
Perhaps the question is, as a website operator how am I monetizing my site? If monetizing via ads then I need humans that might purchase something to see my content. In this situation, the only viable approach in my opinion is to actually charge for the content. Perhaps it doesn't even make sense to have a website anymore for this kind of thing and could be dumped into a big database of "all" content instead. If a user agent uses it in a response, the content owner should be compensated.
If your site is not monetized by ads then having an LLM access things on the user's behalf should not be a major concern it seems. Unless you want it to be painful for users for some reason.
It will also accelerate the trend of app-only content, as well as ubiquitous identity verification and environment integrity enforcement.
Human identity verification is the ultimate captcha, and the only one AGI can never beat.
I don't know if customer sentiment was the driver you think it was. Instead it was regulation, specifically the EU's Second Payment Services Directive (PSD2), which forced banks to open up APIs.
Ultimately I come back to needing a real, unique human ID that involves federal governments. Not that services should be required to allow only users who have one, but for services that say "no, I only want real humans," the ability to ban people by Real ID would reduce this whack-a-mole to the actual people abusing them, instead of the infinite accounts an AI can make.
The most intrusive, yet simplest, protection would be a double-blind token unique to every human: basically an ID key you use to show you're a person.
There are some very real and obvious downsides to this approach, of course. Primarily, the risk of privacy and anonymity. That said, I feel like the average person doesn't seem to care about those traits in the social media era.
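As one illustration of how such a token could limit the privacy damage: a site-scoped pseudonym scheme (this is a simplified sketch, not a true double-blind or blind-signature protocol) can give every site a stable per-person ID while keeping IDs unlinkable across sites without the master secret:

```python
import hashlib
import hmac

# Illustrative only: a human-bound master secret (e.g. held in a hardware
# token) derives a different, stable pseudonym for each site. One person
# always presents the same ID to a given site, but two sites cannot
# correlate their IDs without the master secret.
def site_pseudonym(master_secret: bytes, site: str) -> str:
    return hmac.new(master_secret, site.encode(), hashlib.sha256).hexdigest()
```

A site could then ban a pseudonym once and be done, while the person's identity and cross-site activity stay hidden. Real schemes in this space add blind signatures so even the issuer can't link pseudonyms back to the person.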
The scraping example, I would say, is not an analogy, but an example of the same thing. The only thing AI automation changes is the scope and depth and pervasiveness of automation that becomes possible. So while we could ignore automation in many cases before, it may no longer be practical to do so.
On the other hand, one could cripple any bot simply by declaring that robots aren't allowed.
I would maybe go so far as to say that the wording "I'm not a robot" has become outdated.
A user of the AI is the user... it's not like agents are autonomously operating and inventing their own tasking -_-
As for a solution, it's the same as for any automated thing you don't want (bots/scrapers): you can implement some measures, but you're unlikely to 'defeat' the problem entirely.
As a server operator you can try to distinguish traffic, and users will just find ways around your detection of whether it's automation or not.
The solution is simple: make people pay a small fee to access the content. You guys aren't ready for that conversation though.
I guess it could be considered anti-circumvention under the DMCA. So maybe legally it becomes another copyright question.
User: one press of the trigger => one bullet fired
Bot: one press of the trigger => automatic firing of bullets
To me, anyone using an agent is assigning negative value to your time.
> As a website operator I don’t want a mob of bots draining my resources
So charge for access. If the value the site provides is high, surely these mobs will pay for it! It would also remove the perverse incentives of advertising-driven revenue, which has been the ill of the internet (despite being its primary revenue source).
And if a bot misbehaves by consuming inordinate amounts of resources, rate-limit it with increasing timeouts or tighter limits.
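A sketch of that escalating rate limit, with every threshold hypothetical (100 requests per 60-second window, penalty doubling on each strike):

```python
import time
from collections import defaultdict

class BackoffLimiter:
    """Per-client window counter; repeat offenders get exponentially
    longer timeouts (1s, 2s, 4s, ...) before requests are accepted again."""

    def __init__(self, limit_per_window=100, window=60, base_penalty=1.0):
        self.limit = limit_per_window
        self.window = window
        self.base_penalty = base_penalty
        self.counts = defaultdict(int)          # requests in current window
        self.window_start = defaultdict(float)  # when that window began
        self.blocked_until = defaultdict(float) # active penalty, if any
        self.strikes = defaultdict(int)         # times the limit was exceeded

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        if now < self.blocked_until[client_id]:
            return False
        if now - self.window_start[client_id] > self.window:
            self.window_start[client_id] = now
            self.counts[client_id] = 0
        self.counts[client_id] += 1
        if self.counts[client_id] > self.limit:
            self.strikes[client_id] += 1
            penalty = self.base_penalty * 2 ** (self.strikes[client_id] - 1)
            self.blocked_until[client_id] = now + penalty
            return False
        return True
```

The point is that well-behaved agents never notice it, while a bot hammering the endpoint locks itself out for longer and longer stretches.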
It's impossible to solve. A sufficient agent can control a device that records the user's screen and interacts with their keyboard/mouse, and current LLMs basically pass the Turing test.
IMO it's not worth solving anyways. Why do sites have CAPTCHA?
- To prevent spam, use rate limiting, proof-of-work, or micropayments. To prevent fake accounts, use identity.
- To get ad revenue, use micropayments (web ads are already circumvented by uBlock and co).
- To prevent cheating in games, use skill-based matchmaking or friend-group-only matchmaking (e.g. only match with friends, friends of friends, etc. assuming people don't friend cheaters), and make eSport players record themselves during competition if they're not in-person.
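The proof-of-work option above can be sketched hashcash-style: the server issues a challenge, and the client burns CPU finding a nonce whose hash has enough leading zeros (the difficulty and challenge format here are illustrative):

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int = 4) -> int:
    """Client side: search for a nonce whose SHA-256 digest starts with
    `difficulty` leading zero hex digits. Cost grows ~16x per digit."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: one hash to check, so verification stays cheap."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is what makes it a spam deterrent: the work is negligible for one human-paced request but adds up fast for a bot making thousands.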
What other reasons are there? (I'm genuinely interested and it may reveal upcoming problems -> opportunities for new software.)