The escalation logic (lightweight fetch → full browser only when needed) is a nice optimization. That's exactly the kind of thing that's painful to build yourself.
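For anyone curious what that pattern looks like in practice, here's a minimal sketch. All names here (`looks_js_rendered`, `fetch`, the injected `http_get`/`browser_get` callables, the marker strings) are hypothetical and just illustrate the idea, not this tool's actual API:

```python
# Hypothetical sketch of the lightweight-fetch -> full-browser escalation
# pattern; the heuristic and names are illustrative assumptions.

def looks_js_rendered(html: str) -> bool:
    """Heuristic: did the cheap HTTP fetch return a real page,
    or an empty JS shell that needs a browser to render?"""
    lowered = html.lower()
    # Tiny documents are a strong hint of a client-rendered shell.
    if len(lowered) < 500:
        return True
    markers = ('id="root"></div>', 'id="app"></div>', "enable javascript")
    return any(m in lowered for m in markers)

def fetch(url: str, http_get, browser_get) -> str:
    """Try the cheap HTTP fetch first; escalate to the expensive
    browser path only when the response looks JS-rendered."""
    html = http_get(url)
    if looks_js_rendered(html):
        return browser_get(url)  # full browser, only when needed
    return html
```

Injecting `http_get`/`browser_get` as callables keeps the escalation decision testable without touching the network, which also helps with the debugging question below.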
Curious about debugging: when an agent's request fails or returns unexpected data, how do you surface what actually happened in the browser? We've found that visibility into the actual request/response chain is often the missing piece when debugging agent behavior.
Good call on the ethics stance: respecting robots.txt and identifying the agent via User-Agent.