Hacker News

wavemode · last Wednesday at 6:45 PM

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input.

That doesn't sound ideal at all. And in fact highlights what's wrong with AI product development nowadays.

AI as a tool is wildly popular. Almost everyone in the world uses ChatGPT or knows someone who does. Here's the thing about tools - you use them in a predictable way and they give you a predictable result. I ask a question, I get an answer. The thing doesn't randomly interject when I'm doing other things and I asked it nothing. I swing a hammer, it drives a nail. The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

Too many product managers nowadays want AI to not just be a tool, they want it to be magic. But magic is distracting, and unpredictable, and frequently gets things wrong because it doesn't understand the human's intent. That's why people mostly find AI integrations confusing and aggravating, despite the popularity of AI-as-a-tool.


Replies

wredcoll · last Wednesday at 8:22 PM

> The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

SawStop literally patented this, made millions, and seems to have genuinely improved the world.

I personally am a big fan of tools that make it hard to mangle my body parts.

ericmcer · last Thursday at 5:39 PM

But... a lot of stuff you rely on now was probably once distracting and unpredictable. There are a ton of subtle UX behaviors a modern computer is doing that you don't notice, but if they all disappeared and you had to use Windows 95 for a week, you would miss them.

That is more what I am advocating for: subtle background UX improvements based on an LLM's ability to interpret a user's intent. We had limited abilities to look at an application's state and try to determine a user's intent, but it is easier to do that with an LLM. Yeah, like you point out, some users don't want you to try and predict their intent, but if you can do it accurately a high percentage of the time it is "magic".
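The behavior quoted at the top of the thread (alert only on input that seems important, readily discard short or nonsensical input) can be sketched as a cheap gate in front of the model. This is a minimal illustration, not anyone's actual implementation: `classify_intent` is a hypothetical stand-in for a real LLM call, and the thresholds are arbitrary.

```python
def looks_nonsensical(text: str) -> bool:
    """Cheap heuristic filter: reject input that is very short or
    mostly non-alphabetic before spending an LLM call on it."""
    stripped = text.strip()
    if len(stripped) < 10:
        return True
    alpha = sum(c.isalpha() for c in stripped)
    return alpha / len(stripped) < 0.5


def should_alert(comment: str, classify_intent=None) -> bool:
    """Decide whether to interrupt the user about `comment`.

    `classify_intent` is a hypothetical callable wrapping an LLM that
    returns a label like "important"; the cheap filter runs first so
    the expensive, fallible model is only consulted when the input
    might plausibly matter.
    """
    if looks_nonsensical(comment):
        return False
    if classify_intent is None:
        # No model available: fall back to predictable tool-like behavior.
        return True
    return classify_intent(comment) == "important"
```

The design choice the comment argues for is the ordering: deterministic, predictable filters handle the obvious cases, and the "magic" only kicks in on the residue, which limits how often an unpredictable model can interrupt the user.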

bluGill · last Wednesday at 7:00 PM

I want magic that works. Sometimes I want a tool to interrupt me! I know my route to work, so I'm not going to ask how I should get there today - but 1% of the time there is something wrong with my plan (accident, construction...) and I want the tool to say something. I know I need to turn right to get someplace, but sometimes as a human I'll say left instead, confusing both me and the driver when they don't turn right; an AI that realizes who made the mistake would help.

The hard part is that the AI needs to be correct when it does something unexpected. I don't know if this is a solvable problem, but it is what I want.
