Hacker News

billisonline · yesterday at 2:07 AM · 5 replies

An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.

I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.


Replies

rukuu001 · yesterday at 7:19 AM

I think a lot about how much we altered our environment to suit cars. They're not a perfect solution to transport, but they've been so useful we've built tons more road to accommodate them.

So, while I don't think AGI will happen any time soon, I wonder what 'roads' we'll build to squeeze the most out of our current AI. Probably tons of power generation.

creshal · yesterday at 8:28 AM

> executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.

Remember when "AGI" was the weasel word because 1980s AI kept on not delivering?

rvz · yesterday at 2:32 AM

Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.

To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.

Afterwards, you will see the same broken promises once the company is subject to the expectations of Wall St and pension funds.

cubefox · yesterday at 12:42 PM

> I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can

That's highly irrelevant because if it were otherwise, we would already be replaced. The article was talking about the future.

littlestymaar · yesterday at 1:41 PM

> An engine performs a simple mechanical operation

It only appears “simple” because you're used to seeing working engines everywhere without ever having to maintain one, but neither the previous generations nor the engineers working on modern engines would agree with you on that.

An engine performs “a simple mechanical operation” the same way an LLM performs a “simple computation”.