Hacker News

ericmcer · last Wednesday at 5:47 PM · 24 replies · view on HN

People will want what LLMs can do; they just don't want "AI". I think having it pervade products in a much more subtle way is the future, though.

For example, if you close a YouTube browser tab with a comment half written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a two-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.
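
A rough sketch of what I mean, with `classifyDraft` standing in for a hypothetical call into a local model (the selector is made up too; the debounce exists because `beforeunload` handlers must answer synchronously):

```js
const box = document.querySelector("#comment-box"); // hypothetical selector
let draftSeemsImportant = false;
let timer;

box.addEventListener("input", () => {
  // Classify as the user types (debounced) and cache the verdict,
  // since beforeunload can't wait on an async model call.
  clearTimeout(timer);
  timer = setTimeout(async () => {
    draftSeemsImportant = await classifyDraft(box.value); // hypothetical local-model call
  }, 500);
});

window.addEventListener("beforeunload", (e) => {
  // Only interrupt the user when the cached verdict says the draft matters.
  if (draftSeemsImportant) e.preventDefault();
});
```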

That is a trivial example, but you can imagine how a locally run LLM that was just part of the SDK/API that developers could leverage would lead to better UI/UX. For now everyone is making the LLM the product, but once we start building products with an LLM as a background tool, it will be great.

It is actually a really weird time: my whole career we wanted to obfuscate implementation and present a clean UI to end users; we want them peeking behind the curtain as little as possible. Now everything is like "This is built with AI! This uses AI!".


Replies

mossTechnician · last Wednesday at 6:14 PM

> if you close a YouTube browser tab with a comment half written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a two-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I don't think that's a great example, because you can evaluate the length of the content of a text box with a one-line "if" statement. You could even expand it to check how long you've been writing, and cache the contents of the box with a couple more lines of code.
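
Something like this sketch, with an arbitrary 40-character threshold and `localStorage` as the cache (selector and storage key are made up):

```js
const box = document.querySelector("#comment-box"); // hypothetical selector

// Warn only when the draft is long enough to plausibly matter.
window.addEventListener("beforeunload", (e) => {
  if (box.value.trim().length > 40) e.preventDefault();
});

// Cache the draft on every keystroke so even a dismissed warning isn't fatal.
box.addEventListener("input", () => {
  localStorage.setItem("comment-draft", box.value);
});
```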

An LLM, by contrast, requires a significant amount of disk space and processing power for this task, and it would be unpredictable and difficult to debug, even if we could define a threshold for "important"!

wrl · last Thursday at 7:34 PM

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I read this post yesterday and this specific example kept coming back to me because something about it just didn't sit right. And I finally figured it out: Glancing at the alert box (or the browser-provided "do you want to navigate away from this page" modal) and considering the text that I had entered takes... less than 5 seconds.

Sure, 5 seconds here and there adds up over the course of a day, but I really feel like this example is grasping at straws.

wavemode · last Wednesday at 6:45 PM

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input.

That doesn't sound ideal at all. And in fact it highlights what's wrong with AI product development nowadays.

AI as a tool is wildly popular. Almost everyone in the world uses ChatGPT or knows someone who does. Here's the thing about tools: you use them in a predictable way and they give you a predictable result. I ask a question, I get an answer. The thing doesn't randomly interject when I'm doing other things and I asked it nothing. I swing a hammer, it drives a nail. The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

Too many product managers nowadays want AI to not just be a tool; they want it to be magic. But magic is distracting, and unpredictable, and frequently gets things wrong because it doesn't understand the human's intent. That's why people mostly find AI integrations confusing and aggravating, despite the popularity of AI-as-a-tool.

slg · last Wednesday at 6:41 PM

> For example, if you close a YouTube browser tab with a comment half written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a two-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.

The funny thing is that this exact example could also be used by AI skeptics. It's forcing an LLM into a product with questionable utility, causing it to cost more to develop, be more resource intensive to run, and behave in a manner that isn't consistent or reliable. Meanwhile, if there were an incentive to tweak that alert based on the likelihood of its usefulness, there could have always just been a check on the length of the text. Suggesting this should be done with an LLM as your specific example is evidence that LLMs are solutions looking for problems.

ori_b · last Thursday at 8:38 PM

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input

No, ideally I would be able to predict and understand how my UI behaves, and train muscle memory.

If closing a tab would mean losing valuable data, the ideal UI would allow me to undo it, not try to guess if I cared.
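
A sketch of that undo-style approach: persist the draft unconditionally and offer it back when the page is reopened, so nothing has to be guessed (the selector and storage key are made up):

```js
const box = document.querySelector("#comment-box"); // hypothetical selector
const KEY = "comment-draft";

// Restore whatever was there when the tab was last closed.
const saved = localStorage.getItem(KEY);
if (saved) box.value = saved;

// Persist on every keystroke; closing the tab loses nothing.
box.addEventListener("input", () => localStorage.setItem(KEY, box.value));
```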

thwarted · last Wednesday at 5:56 PM

YouTube could use AI to not recommend videos I've already watched, which is apparently a really hard problem.

ezst · last Thursday at 11:19 PM

You know what that reminds me very much of? That email client feature that asks you "did you forget to add an attachment?". That's been around for three decades (if not longer), since before LLMs were a thing, so I'll pass on it and keep waiting for that truly amazing LLM-enabled capability that we couldn't dream of before. Any minute now.
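
And that feature needs nothing fancier than a keyword check; roughly (the `draft` object and the warning function here are made up):

```js
// The classic heuristic: the body mentions an attachment, but none is attached.
function maybeForgotAttachment(body, attachments) {
  return attachments.length === 0 && /\battach(ed|ment|ing)?\b/i.test(body);
}

if (maybeForgotAttachment(draft.body, draft.attachments)) {
  showAttachmentWarning(); // e.g. "Did you forget to add an attachment?"
}
```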

everdrive · last Wednesday at 6:09 PM

Using such an expensive technology to prevent someone from making a stupid mistake on a meaningless endeavor seems like a complete waste of time. Users should just be allowed to fail.

publicdebates · last Thursday at 7:20 PM

> readily discard short or nonsensical input

When "asdfasdf" is actually a package name, and it's in reply to a request for an NPM package, and the question is formulated in a way that makes it hard for LLMs to make that connection, you will get a false positive.

I imagine this will happen more often than not.

ambicapter · last Wednesday at 6:28 PM

So, like, machine learning. Remember when people used to call it AI/ML? Definitely wasn't as much money being spent on it back then.

nottorp · last Wednesday at 9:04 PM

> The end result is I only have to deal with that annoying popup when I really am glad it is there.

Are you sure about that? It will trigger only for what the LLM declares important, not what you care about.

Is anyone delivering local LLMs that can actually be trained on your data? Or just pre-made models for the lowest common denominator?

Wowfunhappy · last Thursday at 2:42 AM

> For example, if you close a YouTube browser tab with a comment half written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a two-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I agree this would be a great use of LLMs! However, it would have to be really low latency, like on the order of milliseconds. I don't think the tech is there yet, although maybe it will be soon-ish.

nkrisc · last Wednesday at 9:15 PM

It’s because “AI” isn’t a feature. “AI” without context is meaningless.

Google isn’t running ads on TV for Google Docs touting that it uses conflict-free replicated data types, or whatever, because (almost entirely) no one cares. Most people care the same amount about “AI” too.

gt0 · last Thursday at 6:34 AM

Would that be ideal, though? It adds enormous complexity to solve a trivial problem, and it would work, I'm sure, 99.999% of the time, but not 100% of the time.

Ideally, in my view, the browser asks you if you are sure regardless of content.

I use LLMs, but that browser "are you sure" type of integration is adding a massive amount of work to do something that ultimately isn't useful in any real way.

thombles · last Wednesday at 8:00 PM

> you can imagine how a locally run LLM that was just part of the SDK/API that developers could leverage would lead to better UI/UX

It’s already there for Apple developers: https://developer.apple.com/documentation/foundationmodels

I saw some presentations about it last year. It’s extremely easy to use.

bluedino · yesterday at 1:49 AM

I want AI to do useful stuff. Like comb through eBay auctions or Cars.com. Find the exact thing I want. Look at things in photos, descriptions, etc.

I don't think an NPU has that capability.

bitwize · last Thursday at 8:41 PM

At my current work, much of our software stack is based on GOFAI techniques. Except no one calls them AI anymore; they call it a "rules engine". Rules engines, like LLMs, used to be sold standalone and promoted as miracle workers in and of themselves. We called them "expert systems" then; this term has largely faded from use.

This AI summer is really kind of a replay of the last AI summer. In a recent story about expert systems seen here on Hacker News, there was even a description of Gary Kildall from The Computer Chronicles expressing skepticism about AI that parallels modern-day AI skepticism. LLMs and CNNs will, as you describe, settle into certain applications where they'll be profoundly useful, become embedded in other software as techniques rather than applications in and of themselves... and then we won't call them AI. Winter is coming.

leonidasv · yesterday at 1:51 AM

You don't need an LLM for that; a simple Markov chain can solve it with a much smaller footprint.
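
For the curious, a toy version of that idea: count character bigrams in a bit of ordinary text, then flag input whose average bigram score is too low ("asdfasdf" shares almost no bigrams with normal English). The training corpus and threshold here are placeholders:

```js
// Count character bigrams in a training corpus.
function trainBigrams(corpus) {
  const counts = {};
  const text = corpus.toLowerCase();
  for (let i = 0; i < text.length - 1; i++) {
    const pair = text[i] + text[i + 1];
    counts[pair] = (counts[pair] || 0) + 1;
  }
  return counts;
}

// Average log-count of the input's bigrams under the model.
function score(model, input) {
  const text = input.toLowerCase();
  let total = 0;
  for (let i = 0; i < text.length - 1; i++) {
    total += Math.log1p(model[text[i] + text[i + 1]] || 0);
  }
  return total / Math.max(1, text.length - 1);
}

const model = trainBigrams(
  "the quick brown fox jumps over the lazy dog and then writes a long comment about it"
);
const THRESHOLD = 0.5; // placeholder; tune on real drafts

console.log(score(model, "asdfasdf"));             // 0: no familiar bigrams
console.log(score(model, "this comment matters")); // noticeably higher
```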

expedition32 · last Wednesday at 9:15 PM

Honestly, some of the recommendations to watch next that I get on Netflix are pretty good.

No idea if they are AI; Netflix doesn't tell and I don't ask.

AI is just a toxic brand at this point IMO.

tliltocatl · last Thursday at 9:17 PM

No. No-no-no-no-no. I want predictability. I don't want a black box with no tuning handles and no awareness of the context to randomly change the behavior of my environment.

ryukoposting · last Thursday at 8:02 PM

Bingo. Nobody uses ChatGPT because it's AI. They use it because it does their homework, or it helps them write emails, or whatever else. The story can't just be "AI PC." It has to be "hey look, it's ChatGPT but you don't have to pay a subscription fee."

zzo38computer · last Thursday at 11:27 PM

Hopefully, you could make a browser extension to detect whether an HTML form has unsaved changes; it should not require AI or an LLM. (This will work better when the document doesn't include its own JavaScript, but it is possible to make it work with JavaScript too.)
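
A sketch of such a content script: compare each field against the value the page loaded with, which the DOM already tracks via `defaultValue`/`defaultChecked`:

```js
// Warn when any form field differs from its initial (page-load) state.
function formIsDirty() {
  for (const el of document.querySelectorAll("input, textarea")) {
    if (el.type === "checkbox" || el.type === "radio") {
      if (el.checked !== el.defaultChecked) return true;
    } else if (el.value !== el.defaultValue) {
      return true;
    }
  }
  return false;
}

window.addEventListener("beforeunload", (e) => {
  if (formIsDirty()) e.preventDefault();
});
```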

themafia · last Thursday at 9:32 PM

I want a functioning search engine. Keep your goofy, opinionated, mostly wrong LLM out of my way, please.