Is it possible to ask the vision agent to "map" the UI and expose it to another agent as a set of interfaces that resemble an API better? From what I understand the vision agent now should both know that "next page" shows more results and that they need to get more results in the first place.
If one agent just explored the UI, maybe in a test environment, and output a somewhat-structured description of the various UI elements and their behavior, and another agent were then given that description, would the second agent perform better than an agent that has to both explore the UI and accomplish the given task at the same time?
With an example UI I made up, the description (API-like interface definition) could be something like:
Get all reviews:
To get all the reviews you need to go to each page and click "show full review" for every review summary in that page.
Go to each page:
Start at page 1 (the default when in the Reviews tab). Continue by clicking the "next" button until the "next" button is no longer available (as you've reached the last page).
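That kind of description could also be distilled into callable "skills". A minimal sketch of the idea, where everything (the `FakeReviewsUI` stand-in, `get_all_reviews`, and all method names) is hypothetical and just illustrates the pagination rule above:

```python
class FakeReviewsUI:
    """Stand-in for a real paginated Reviews tab: 3 pages of summaries."""
    def __init__(self):
        self.pages = [["r1", "r2"], ["r3"], ["r4", "r5"]]
        self.page = 0  # page 1 is the default when entering the Reviews tab

    def summaries(self):
        return list(self.pages[self.page])

    def show_full_review(self, summary):
        # Simulates clicking "show full review" on a summary
        return f"full text of {summary}"

    def has_next(self):
        # The "next" button disappears on the last page
        return self.page < len(self.pages) - 1

    def click_next(self):
        if not self.has_next():
            raise RuntimeError("'next' button not available")
        self.page += 1


def get_all_reviews(ui):
    """Skill distilled from exploration: visit every page, expand every summary."""
    reviews = []
    while True:
        for s in ui.summaries():
            reviews.append(ui.show_full_review(s))
        if not ui.has_next():  # reached the last page
            break
        ui.click_next()
    return reviews
```

The second agent would then call `get_all_reviews` as an API instead of re-deriving the "click next until it's gone" navigation each time.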
So the second agent can skip some thinking about how to navigate because it already has that skill. The first agent can explore the UI on its own, once, without worrying about messing things up if there's a test environment. Or am I misunderstanding the article completely? Probably. But it's interesting nonetheless. Sorry if it makes no sense.
>Is it possible to ask the vision agent to "map"
No, most vision models focus on a subset of an image at a time when doing image -> text; image -> image uses the whole image.
I think you're right, you can get agents to do what we do -- learn how a website works. Then expose that model as a simple API. There will still be some vision tasks for navigation but they will be just vision tasks, no thinking required.
That was my first thought as well. A lot of current web development relies heavily on code generation, with obfuscation and compression slapped on top, leading to complicated structures. Then, on top of that, more code (client-side JavaScript) reconfigures everything again. You end up with fairly complicated HTML/CSS/JavaScript to wade through.
For better and worse, 5-10 MiB isn't uncommon for a web app.
Instead of trying to go "bottom up" and, effectively, do what a browser engine is doing in reverse, it seems easier to go "top down" like a human does and go off the visual representation.