Hacker News

roenxi · yesterday at 12:20 PM · 9 replies

That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable; he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents, he'll get through life achieving at least a normal level of success.

But there is also a more subtle thing, which is that we're trending towards superintelligence with these AIs. At that point, Bob may discover that anything agents can't do, Alice can't do either, because she is limited by trying to think using soggy meat as opposed to a high-performance engineered thinking system. Not going to win that battle in the long term.

> The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.


Replies

kelnos · yesterday at 12:50 PM

> we're trending towards superintelligence with these AIs

The article addresses this, because, well... no, we aren't. Maybe we are. But it's far from clear that we're not just moving toward a plateau in what these agents can do.

> Whether a human does actual work or not isn't particularly exciting to a market.

You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough and able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.

dandellion · yesterday at 1:02 PM

> we're trending towards superintelligence with these AIs

I wouldn't count on that, because even if it happens, we don't know when it will happen, and it's one of those things where how close it looks to be is no indication of how close it actually is. We could just as easily spend the next 100 years being 10 years away from AGI. Just look at fusion power, self-driving cars, etc.

b00ty4breakfast · yesterday at 12:59 PM

> But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs

do you have any evidence for that, though? Besides marketing claims, I mean.

whateveracct · yesterday at 3:30 PM

> That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise.

I have literally never run into this in my career. Challenges have always been something to help me grow.

mattmanser · yesterday at 12:34 PM

The author's point went a little over your head.

It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.

From the article:

> If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

ozim · yesterday at 2:32 PM

The market values bulldozers for bulldozing jobs. No one is going to use a bulldozer to mow a lawn.

If Bob is going to spend $500 in tokens for something I can do for $50, I don't think Bob is going to stay long in the lawn-mowing market driving a bulldozer.

uoaei · yesterday at 12:29 PM

"Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.

Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?

ModernMech · yesterday at 2:40 PM

> Not going to win that battle in the long term.

I would take that bet on the side of the wet meat. In the future, every AI will be an ad executive. At least the meat programming isn't preloaded to sell ads every N tokens.

wizzwizz4 · yesterday at 12:49 PM

From the article:

> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.
