Hacker News

avaer · yesterday at 11:55 PM

Since I didn't see it in the Readme, how does this compare to something like Google's A2UI? Seems like it's doing more, but could e.g. Tambo work on top of A2UI protocol or is it a different beast?

My agents need a UI and I'm in the market for a good framework to land on, but as is always the case in these kinds of interfaces I strongly suspect there will be a standard inter-compatible protocol underlying it that can connect many kinds of agents to many kinds of frontends. What is your take on that?


Replies

lachieh · today at 2:02 AM

Hey! I'm on the Tambo team, so I'll chip in. There isn't really any reason we couldn't support A2UI. It's a great way to let models describe generative UIs, and we could add an A2UI renderer.

The way we elevator-pitch Tambo is "an agent that understands your UI" (which, admittedly, says little about the implementation details). We've spent our time letting components (whether pre-existing or purpose-built) be registered as tools that can be controlled and rendered either in-chat or out in your larger application. The chat box shouldn't be the boundary.
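To make that concrete, here's a minimal sketch of the "components registered as tools" idea. This is not the actual Tambo API; all names (`registerComponent`, `handleToolCall`, `ComponentTool`) are hypothetical, and the `render` function stands in for real React rendering:

```typescript
// Hypothetical sketch: registering a UI component as a "tool" the model
// can invoke with structured props. Names are illustrative, not Tambo's API.

type PropSchema = Record<string, "string" | "number" | "boolean">;

interface ComponentTool {
  name: string;
  description: string;
  props: PropSchema;
  // Stand-in for a React component render; returns markup as a string here.
  render: (props: Record<string, unknown>) => string;
}

const registry = new Map<string, ComponentTool>();

function registerComponent(tool: ComponentTool): void {
  registry.set(tool.name, tool);
}

// When the model emits a tool call, look up the component and render it
// with the model-supplied props (in-chat or elsewhere in the app).
function handleToolCall(name: string, props: Record<string, unknown>): string {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown component: ${name}`);
  return tool.render(props);
}

// Example: a pre-existing chart component registered as a tool.
registerComponent({
  name: "StockChart",
  description: "Line chart of a ticker over a number of days",
  props: { ticker: "string", days: "number" },
  render: (p) => `<StockChart ticker=${p.ticker} days=${p.days} />`,
});

console.log(handleToolCall("StockChart", { ticker: "AAPL", days: 30 }));
// → <StockChart ticker=AAPL days=30 />
```

The point of the pattern is that the model never needs a UI-specific protocol: it just calls a named tool, and the frontend decides where the resulting component appears.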

Personally, my take on standards like A2UI is that they could prove useful, but the models have to understand them readily; otherwise you burn additional context explaining the protocol. Models already understand tool-calling, so we're making use of that for now.
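To illustrate the trade-off: a component can be described in the tool-calling format models are already trained on, with no protocol spec in the prompt. The schema below follows the common OpenAI-style function-tool shape; the component name and fields are made up for the example:

```typescript
// Sketch: describing a UI component as an ordinary function tool
// (OpenAI-style schema; "render_WeatherCard" is an illustrative name).
const weatherCardTool = {
  type: "function",
  function: {
    name: "render_WeatherCard",
    description: "Render a weather card in the app UI",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string" },
        unit: { type: "string", enum: ["C", "F"] },
      },
      required: ["city"],
    },
  },
};

// The model responds with a plain tool call; the frontend maps the call
// name back to a component and renders it with the parsed arguments.
const toolCall = {
  name: "render_WeatherCard",
  arguments: '{"city":"Oslo","unit":"C"}',
};
const props = JSON.parse(toolCall.arguments);
console.log(props.city); // → Oslo
```

Because the model is only ever emitting a tool call, no context is spent teaching it a separate UI description language.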