The problem is I don't think every answer needs a mini-app. I'd argue there are very few answers that do.
For example, it feels like Google's featured snippet (the quick-answer box) but expanded. The thing is, many people don't like the featured snippet, and there's a reason it doesn't appear for many queries: it doesn't contribute meaningfully to them.
This functionality does exactly the opposite of what building good web apps entails: rather than "unpacking functionality" and tailoring it to a specific audience, it "packs" all functionality into one generalized use case, at the cost of becoming extremely mediocre at each use case, which makes it strictly worse than any dedicated tool you'd use for the job.
As a specific example, I clicked your apartments-in-LES search (https://www.phind.com/search/find-me-options-for-a-72e019ce-...) and it shows just 4 listings...? It shows some arbitrary subset of everything I could find on StreetEasy, and then provides a subset of the search functionality, losing filters such as days on market, neighborhood, etc.
It's a cool demo, but "on-demand software" is exactly a solution in search of a problem.
The difficult question you need to ask, as with the featured snippet, is: which problems are worth solving with this, and is the pain point big enough to be worth solving?
Thanks for the feedback, and I agree that it is very much early days for this product category. To be clear, our goal is to make the software specific for an audience: you. What's exciting, though, is that models are rapidly improving at building on-demand software and this will directly benefit Phind. There are still many edge cases, but I think it will get better quickly.
I tend to agree: I don’t understand what the “one-off app” is trying to achieve. In the rental-apartment example, the user specified the parameters in the query. Just apply them, right?
I offer this in the spirit of feeling like I’m missing something, not out of negativity; I just genuinely don't understand the proposition.
What’s the advantage of trying to extract and normalize features from already-messy data sources, then providing controls that duplicate the query, rather than just applying the query and returning the results? Isn’t the user turning to a natural-language LLM specifically to avoid operating idiosyncratic UI controls?
For that matter, it takes time to learn to use an interface effectively, and to understand how what it says it’s doing connects to what it’s actually doing. I know I can always trust McMaster-Carr’s filter controls, and I know I can never trust Amazon’s wacky random ones.
It seems to me that it’s much harder to pick the right controls and make them work correctly than it is to throw some controls into an interface. Maybe that’s what I’m missing: that merely wiring in controls in the first place is the hard part for most people who don’t work in this space.
Is the idea here that I’d need to learn a brand new interface, and figure out whether I can trust it, with every query?