Hacker News

felipeerias · yesterday at 11:30 AM

The discussion about “AGI” is somewhat pointless, because the term is nebulous enough that it will probably end up being defined as whatever comes out of the ongoing huge investment in AI.

Nevertheless, we don’t have a good conceptual framework for thinking about these things, perhaps because we keep trying to apply human concepts to them.

The way I see it, an LLM crystallises a large (but incomplete and disembodied) slice of human culture, as represented by its training set. The fact that an LLM is able to generate human-sounding language does not mean that it works anything like a human mind.


Replies

roenxi · yesterday at 12:42 PM

Not quite pointless - something we have established with the advent of LLMs is that many humans have not attained general intelligence. So we've clarified something that a few people must have been getting wrong; I used to think that the bar was set so that almost all humans met it.

lukebuehler · yesterday at 1:03 PM

I agree that the term can muddy the waters, but as a shorthand for roughly "an agent calling an LLM (or several LLMs) in a loop, producing economic output similar to that of a human knowledge worker", it is useful. And if you pay attention to the AI leaders, that is what the definition has become.
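
To make that shorthand concrete, here is a minimal sketch of "an LLM in a loop" in Python. Everything here is a hypothetical stand-in (call_llm, the TOOLS table, the FINISH convention), not any particular vendor's API:

    # Minimal sketch of "an agent calling an LLM in a loop".
    # call_llm and TOOLS are hypothetical stand-ins, not a real vendor API.

    def call_llm(prompt: str) -> str:
        """Hypothetical model call; a real agent would query an LLM API here."""
        return "FINISH: stub answer"

    TOOLS = {
        "search": lambda query: f"results for {query!r}",  # placeholder tool
    }

    def agent(task: str, max_steps: int = 10) -> str:
        context = f"Task: {task}"
        for _ in range(max_steps):
            reply = call_llm(context)
            if reply.startswith("FINISH:"):       # the model declares it is done
                return reply[len("FINISH:"):].strip()
            tool, _, arg = reply.partition(" ")   # e.g. "search cheap flights"
            result = TOOLS.get(tool, lambda a: "unknown tool")(arg)
            context += f"\n{reply}\n-> {result}"  # feed the observation back in
        return "step budget exhausted"

    print(agent("book a flight"))

The point of the sketch is that the "agency" lives entirely in the loop: the model proposes an action, the harness executes it, and the observation is appended to the context for the next call.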

idiotsecant · yesterday at 1:05 PM

I think it has a practical, easy definition: can you drop an AI into a terminal, give it the same resources as a human, and reliably get independent work product better than what that human would produce, across a wide range of domains? If so, it's an AGI.
