Hacker News

IanCal yesterday at 10:07 AM

Tbh I find this view odd, and I wonder what people consider AGI now. It used to be that we had extremely narrow pieces of AI; I remember being on a research project about architectures where just a very basic "what's going on?" was advanced. Understanding that someone had asked a question, one that could be answered by getting a book, and then being able to navigate to the place where that book was likely to be, was fancy. Most systems could solve literally one type of problem. They weren't just bad at other things; they were fundamentally incapable of anything but an extremely narrow use case.

I can throw wide-ranging problems at things like GPT-5 and get what seem like dramatically better answers than if I asked a random person. The amount of common sense is so far beyond what we had that it's hard to express. It used to be pointed out constantly that the things we had were below basic insect level. Now I have something that can research a charity, find grants and make coherent arguments for them, read matrix specs and debug error messages, and understand sarcasm.

To me, it's clear that AGI is here. But then, what I always pictured it to be may be very different from what you pictured. What's your image of it?


Replies

whizzter yesterday at 11:19 AM

It's more that "random" people are dumb as bricks (but we've in the name of equality and historic measurement errors decided to forgo that), add to it that AI's have a phenomenal (internet sized) memory makes them far more capable than many people.

However, even "dumb" people can often make judgements structures in a way that AI's cannot, it's just that many have such a bad knowledge-base that they cannot build the structures coherently whereas AI's succeed thanks to their knowledge.

I wouldn't be surprised if the top AI firms today spend an inordinate amount of time building "manual" appendages into their LLM systems to handle tasks such as debugging, to uphold the facade that the system is really smart. In reality it's mostly papering over a leaky model to avoid losing the enormous investments they need to stay alive, in the hope that someone on their staff comes up with a real solution to self-learning.

https://magazine.sebastianraschka.com/p/understanding-reason...

Yoric yesterday at 10:45 AM

I think it's clear that nobody agrees on what AGI is. OpenAI describes it in terms of revenue. Other people/orgs describe it in terms of, essentially, magic.

If I had to pick a name, I'd probably describe ChatGPT & co. as advanced proofs of concept for general-purpose agents, rather than AGI.

boppo1 yesterday at 10:57 AM

Human-level intelligence. Being able to know what it doesn't know. Having a practical grasp on the idea of truth. Doing math correctly, every time.

For example: I give it a high-res photo of a kitchen and ask it to calculate the volume of a pot in the image.

homarp yesterday at 11:32 AM

My picture of AGI is 1) autonomous improvement, and 2) the ability to say "I don't know / it can't be done".

adwn yesterday at 10:22 AM

I think the discrepancy between different views on the matter mainly stems from the fact that state-of-the-art LLMs are better (sometimes far better) at some tasks, and worse (sometimes far worse) at other tasks, compared to average humans. For example, they're better at retrieving information from huge amounts of unstructured data. But they're also terrible at learning: any "experience" that falls out of the context window is lost forever, and the model can't learn from its mistakes. Actually making it learn something requires a great many examples and a lot of compute, whereas a human can permanently learn from a single example.

AlienRobot yesterday at 10:41 AM

Nobody is saying that LLMs don't work like magic. I know how neural networks work, and they still feel like voodoo to me.

What we are saying is that LLMs can't become AGI. I don't know what AGI will look like, but it won't look like an LLM.

There is a difference between being able to melt iron and being able to melt tungsten.