
TeMPOraL · last Wednesday at 11:30 PM · 1 reply

Right. But what if you dropped the human-facing UI, and instead exposed the backend (i.e. a database + CRUD API + heavy domain flavoring) to the LLM as a tool? Suddenly you not only get more reliable recognition (you're more likely to eat something you've eaten before than something completely new), but the LLM can also use this data to inform answers on other topics (e.g. diet recommendations, restaurant recommendations), or do arbitrary analytics on demand, leveraging the other tools at its disposal (Python, JS, SQL, and Excel being the most obvious ones), etc. The LLM would also become more useful at maintaining shopping lists and cross-referencing them with deals in local grocery stores - which actually subsumes several classes of apps people use. And so on.
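
As a minimal sketch of that shape - assuming an OpenAI-style function-calling tool schema, with the table and function names made up for illustration - the "backend as a tool" could be as little as:

    import sqlite3

    # Hypothetical meal-log backend: a plain database plus a couple of CRUD functions.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE meals (eaten_at TEXT, food TEXT, kcal INTEGER)")

    def log_meal(food: str, kcal: int, eaten_at: str) -> None:
        """Record a meal; the LLM calls this instead of the user tapping through a UI."""
        db.execute("INSERT INTO meals VALUES (?, ?, ?)", (eaten_at, food, kcal))

    def query_meals(sql: str) -> list[tuple]:
        """Analytics hook: lets the LLM run SQL over the meal history
        (a real backend would restrict this to read-only queries)."""
        return db.execute(sql).fetchall()

    # Tool descriptions handed to the model (OpenAI-style function-calling format).
    TOOLS = [
        {"type": "function", "function": {
            "name": "log_meal",
            "description": "Record a meal the user reports having eaten.",
            "parameters": {"type": "object", "properties": {
                "food": {"type": "string"},
                "kcal": {"type": "integer"},
                "eaten_at": {"type": "string", "description": "ISO 8601 timestamp"}},
                "required": ["food", "kcal", "eaten_at"]}}},
        {"type": "function", "function": {
            "name": "query_meals",
            "description": "Run a SQL query over the user's meal history.",
            "parameters": {"type": "object", "properties": {
                "sql": {"type": "string"}},
                "required": ["sql"]}}},
    ]

Cross-referencing with a grocery-deals feed, or anything else in that vein, is then just one more function appended to the same list.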

> Just having an LLM is not the right UX for the vast majority of apps.

I argue it is, as most things people do in software don't need to be hands-on. Intuition pump: if you can imagine asking someone else - a spouse, a friend, an assistant - to use some app to do something for you, instead of using the app yourself, then turning that app into a set of tools for an LLM would almost certainly improve UX.

But I agree it's not fully universal. If e.g. you want to browse the history of your meals, then having to ask an LLM for it is inferior to tapping a button and seeing some charts. My perspective is that tool for LLM > app when you have some specific goal you can express in words, and thus could delegate; conversely, directly operating an app is better when your goal is unclear or hard to put in words, and you just need to "interact with the medium" to achieve it.


Replies

satvikpendem · last Thursday at 7:13 AM

Your first paragraph describes features I already have slated to work on, as I ran into the same things, e.g. if I have 500 calories left for the day, what can I eat within that limit? But I'm not sure why I'd need to ditch the UI entirely; my app would show the foods as a scrollable list, and you'd tap one to get more info. I suppose that is sort of replicating the LLM UI in a way, since it also produces lists of items, but an app with interactive UX, rather than just typing, still feels more natural to most people.
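
The "500 calories left" query itself is basically a one-liner once the log lives in a database; a rough sketch, assuming a hypothetical meals table with food and kcal columns - whether the result then surfaces as a scrollable list in the app or as an LLM answer is just a presentation choice:

    def foods_within_budget(db, remaining_kcal: int) -> list[tuple]:
        """Foods from the user's own history that fit the remaining daily budget,
        favouring things they've eaten most often."""
        return db.execute(
            "SELECT food, kcal, COUNT(*) AS times_eaten "
            "FROM meals WHERE kcal <= ? "
            "GROUP BY food, kcal ORDER BY times_eaten DESC",
            (remaining_kcal,),
        ).fetchall()
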

A solution could be, can the AI generate the UI then on the fly? That's the premise of generative UI, which has been floating around even on HN. Of course the issue with it is every user will get different UIs, maybe even in the same session. Imagine the placement of a button changing every time you use an app. And thus we are back to the original concept, a UX driven app that uses AI and LLMs as informational tools that can access other resources.