Hacker News

the_af • today at 4:56 PM

Playing devil's advocate here; I'm not antagonizing you, just thinking out loud.

> if it exceeds the context, the agent does random stuff that is often against simplicity and coherent logical structure.

That's a current technical limitation. Are you so sure it won't be overcome in the near/mid future?

> LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build

But work is being done to remove or automate even this layer, right? It may be hyperbole (in fact, it is), but aren't Anthropic et al. predicting this? Why wouldn't your boss, or your boss's boss, do this instead of you? If they lack the judgment currently, are you so sure they cannot gain it, once they no longer have to waste time learning how to code? If not now, what about soon-ish?

> At this current year and date, the AI does not automate me in any way

Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?


Replies

Aperocky • today at 6:43 PM

Well, if you do nothing, you should definitely be worried, because not using LLMs is rapidly becoming untenable.

If you use them a lot, you'll grow skeptical of some of the claims and hype, and develop a sense of where this is heading.

My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they are definitely not right, or are only right by luck.

My personal judgment is that both of these are hard caps until someone invents something that's not a transformer, basically starting from scratch.

ChrisLTD • today at 6:42 PM

Does it seem to you like those issues will be solved soon? Does your boss have the time to do this AI-wrangling work on top of their other tasks, even if they don't have to learn to code?
