Your first paragraph describes features I already have slated to work on, since I ran into the same things, e.g. if I have 500 calories left for the day, what can I eat within that limit? But I'm not sure why I'd need to ditch the UI entirely: my app would show the candidate foods as a scrollable list, and you'd tap one to get more info. I suppose that is sort of replicating the LLM UI in a way, since it also produces lists of items, but an interactive UX still feels more natural to most people than just typing.
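To be concrete, the calorie-budget feature is really just a filter plus a sorted list, no chat needed. A minimal sketch in TypeScript, with made-up food data and names chosen purely for illustration:

```typescript
interface Food {
  name: string;
  calories: number;
}

// Hypothetical sample data; a real app would pull this from its food database.
const foods: Food[] = [
  { name: "Greek yogurt", calories: 150 },
  { name: "Banana", calories: 105 },
  { name: "Chicken breast (100g)", calories: 165 },
  { name: "Cheeseburger", calories: 550 },
];

// Return foods that fit in the remaining calorie budget, largest first.
function foodsWithinBudget(all: Food[], remaining: number): Food[] {
  return all
    .filter((f) => f.calories <= remaining)
    .sort((a, b) => b.calories - a.calories);
}

// e.g. 500 calories left for the day
console.log(foodsWithinBudget(foods, 500));
```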
One possible answer is to have the AI generate the UI on the fly. That's the premise of generative UI, which has been floating around, even here on HN. The obvious issue is that every user gets a different UI, maybe even within the same session; imagine the placement of a button changing every time you open the app. And so we're back to the original concept: a UX-driven app that uses AI and LLMs as informational tools that can access other resources.
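That last pattern, keeping the UI fixed and letting the model only fill in data, can be sketched roughly like this. The function names, prompt shape, and canned response are all assumptions for illustration, not any specific API:

```typescript
interface Suggestion {
  name: string;
  calories: number;
}

// Placeholder for whatever model/API the app actually uses. The key point is
// that the model returns structured data (JSON), not a UI: the prompt pins
// down a schema and the app validates the reply before rendering it.
async function suggestFoods(remainingCalories: number): Promise<Suggestion[]> {
  // A real implementation would call an LLM here with a prompt along the lines of
  // `Suggest foods under ${remainingCalories} kcal as JSON: [{"name", "calories"}]`
  // and then JSON.parse + validate. Canned data keeps this sketch runnable.
  return [
    { name: "Lentil soup", calories: 230 },
    { name: "Apple with peanut butter", calories: 280 },
  ];
}

// The UI stays fixed: always the same scrollable list, only the data changes.
async function renderSuggestionList(remainingCalories: number): Promise<void> {
  const items = await suggestFoods(remainingCalories);
  for (const item of items) {
    console.log(`${item.name}: ${item.calories} kcal`); // stand-in for a list row
  }
}

renderSuggestionList(500);
```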