Good question. When I think about how people actually discover features, it's usually:
1) clicking through menus
2) reading docs/watching tutorials
3) getting hands-on help from a coworker or support person
Some apps try to do progressive disclosure as you get better at using them, but that's hard to scale: it works okay for simpler apps and breaks down as complexity grows.
With generative UI, I think you're basically building option 3 directly into the app.
Users learn to just ask the app how to do something or describe their problem, and it surfaces the right tools or configures things for them.
Still early days though. I think users will also have to adopt new behaviors to get the most out of generative apps.