Hacker News

jdahlin · yesterday at 5:51 PM · 4 replies

One of the most interesting possibilities is LLMs becoming cheap and small enough that apps can ship with a built-in one, which can then adjust the app's code for each user based on their input and usage patterns.


Replies

awepofiwaop · today at 4:12 AM

The clear intent is to stop allowing regular people to compute... anything. Instead, you'll be given a screen that only connects to $LLM_SERVER, and the only interface will be voice/text, through which you ask it to do things. It then does those things non-deterministically, and more slowly than they would be done right now. But at least you won't have control over how it works!

candiddevmike · yesterday at 5:54 PM

If this ever happens, there will be no point in GUI apps anymore; your AI assistant, or what have you, will just interact with everything on your behalf and/or present you with some kind of master interface for everything.

I don't see a bunch of small agents in the future, just one per device or user. Maybe there will be a fleeting moment for GUI/local apps to tie into some local, OS-level LLM library (or some kind of WebLLM spec) to leverage this local agent in your app.
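A rough sketch of what that could look like from an app's side: the OS exposes one shared on-device agent, and apps call into it instead of bundling their own model. Everything here is invented for illustration (`LocalAgent`, `getLocalAgent`, `complete`); no such OS API or WebLLM spec exists today, so the stub just returns a canned string.

```typescript
// Hypothetical shape of an OS-provided local-LLM library.
// All names are invented; this is a sketch, not a real API.

interface LocalAgent {
  // Ask the shared on-device model to complete a prompt.
  complete(prompt: string): Promise<string>;
}

// Stub standing in for the OS-provided agent. A real implementation
// would route the prompt to a single on-device model shared by all apps.
function getLocalAgent(): LocalAgent {
  return {
    async complete(prompt: string): Promise<string> {
      return `stub response to: ${prompt}`;
    },
  };
}

// An app leverages the shared agent rather than shipping its own LLM,
// e.g. to adapt its UI to this user's habits.
async function main(): Promise<void> {
  const agent = getLocalAgent();
  const reply = await agent.complete(
    "rearrange my toolbar based on my last 30 days of usage"
  );
  console.log(reply);
}

main();
```

The point of the sketch is the ownership split: the device owns the model and the app only owns prompts, which is what would make "one agent per device or user" workable.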

jazzypants · yesterday at 6:01 PM

I've heard this referenced multiple times, and I have yet to hear the value clearly articulated. Are you saying that every user would eventually be using a different app? Wouldn't that eventually negate the need for the app developer anyway, since you would eventually be unable to offer any kind of support? Or are we just talking about the design changing while the actual functionality stays the same? How would something like this actually behave in reality?

a_better_world · yesterday at 5:57 PM

LISP returns!