
stuaxo today at 8:13 AM · 9 replies

It's insane really; anyone who has worked with LLMs for a bit and has an idea of how they work shouldn't think it's going to lead to "AGI".

Hopefully some big players, like FB, bankrupt themselves.


Replies

IanCal today at 10:07 AM

Tbh I find this view odd, and I wonder what people view as AGI now. It used to be that we had extremely narrow pieces of AI; I remember being on a research project about architectures where even very basic "what's going on?" reasoning was considered advanced. Understanding that someone had asked a question, that it could be answered by getting a book, and then being able to navigate to where that book was likely to be was fancy. Most systems could solve literally one type of problem. They weren't just bad at other things; they were fundamentally incapable of anything but an extremely narrow use case.

I can throw wide-ranging problems at things like GPT-5 and get what seem like dramatically better answers than if I asked a random person. The amount of common sense is so far beyond what we had that it's hard to express. It always used to be pointed out that the things we had were below basic insect level. Now I have something that can research a charity, find grants and make coherent arguments for them, read matrix specs and debug error messages, and understand sarcasm.

To me, it's clear that AGI is here. But then what I always pictured may be very different from what you picture. What's your image of it?

thaawyy33432434 today at 11:38 AM

Recently I realized that the US is very close to a centrally planned economy. Meta wasted $50B on the metaverse, which is about how much Texas spends on healthcare. Now the "AI" investments seem dubious.

You could fund 1000+ projects with this kind of money. This is not effective capital allocation.

amelius today at 9:23 AM

The only way we'll have AGI is if people get dumber. Using modern tech like LLMs makes people dumber. Ergo, we might see AGI sooner than expected.

foobarian today at 1:58 PM

I think something like what we saw in the show "Devs" is much more likely, although what the developers did with it in the show was bonkers unrealistic. But basically some kind of big enough quantum device.

janalsncm today at 12:51 PM

I think AI research is like anything else, really. The smartest people are heads-down working on their problems. The people going on podcasts are less connected to the day-to-day work.

It’s also pretty useless to talk about whether something is AGI without defining intelligence in the first place.

menaerus today at 9:45 AM

> ... and has an idea of how they work shouldn't think it's going to lead to "AGI"

Not sure what level of understanding you are referring to, but having learned about and researched pretty much all LLM internals, I think this has led me to exactly the opposite line of thinking. To me it's unbelievable what we have today.

guardian5x today at 8:25 AM

Just scaling them up might not lead to "AGI", but they can still act as a bridge to it.

meowface today at 11:58 AM

This is not and has not been the consensus opinion. If you're not an AI researcher, you shouldn't write as if you've set your confidence parameter to 0.95.

Of course it might be the case, but it's not a thing that should be expressed with such confidence.

blackhaz today at 8:27 AM

Is it widely accepted that LLMs won't lead to AGI? I asked Gemini, and it came up with four primary arguments for this claim; I'll comment on each briefly:

1) LLMs are simple "next token predictors", so they merely mimic thinking: But can't it be argued that current models operate across layers of varying depth and are able to actually understand by building concepts and making connections at abstract levels? Also, don't we all mimic? (There's a minimal sketch of the next-token mechanic at the end of this comment.)

2) The grounding problem: Yes, models build their world models on text data, but we already have models operating on non-textual data, so this appears to be a technical obstacle rather than a fundamental one.

3) Lack of a world model: But can anyone really claim they have a coherent model of reality? There are flat-earthers, yet I still wouldn't deny that they're generally intelligent. People hallucinate and make mistakes all the time. I'd argue hallucinations are in fact a sign of emerging intelligence.

4) Fixed training data sets: This looks like it's now being actively addressed with self-improving models?

I just couldn't find a strong argument supporting this claim. What am I missing?
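To make point 1 concrete, here's a minimal sketch of what "next token prediction" means in practice, assuming the Hugging Face transformers library with GPT-2 as a stand-in model (the specific prompt and token count are just for illustration):

    # Minimal sketch of autoregressive "next token prediction",
    # using GPT-2 via Hugging Face transformers as a stand-in model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    # Each step only scores the single next token given everything so far.
    for _ in range(5):
        with torch.no_grad():
            logits = model(ids).logits      # shape (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()    # greedy pick of the next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

The loop itself is trivially "just prediction"; the debate in point 1 is whether the deep network computing those logits is building concepts and abstractions along the way.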
