Hacker News

lukebuehler · yesterday at 9:15 AM

If we cannot say they are "thinking" or "intelligent" while we do not have a good definition--or, even more difficult, unanimous agreement on a definition--then the discussion just becomes about output.

They are doing useful stuff, saving time, etc., which can be measured. Hence the definition of AGI has largely become: "can produce or surpass the economic output of a human knowledge worker".

But I think this detracts from the more interesting discussion of what they essentially are. So, while I agree that we should push on getting our terms defined, I'd rather work with a hazy definition than derail so many AI discussions into mere economic output.


Replies

Rebuff5007 · yesterday at 9:27 AM

Here's a definition: how impressive is the output relative to the input? And by input, I don't just mean the prompt, but all the training data itself.

Do you think someone who has only ever studied pre-calc would be able to work through a calculus book, given sufficient time? How about a multi-variable calc book? How about grad-level mathematics?

IMO intelligence and thinking are strictly about this ratio: what can you extrapolate from the smallest amount of information possible, and why? From this perspective, I don't think any of our LLMs are remotely intelligent, despite what our tech leaders say.
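
A minimal sketch of what scoring that ratio could look like (the generalization_ratio function, the log weighting, and the token counts below are all hypothetical choices for illustration, not measured values):

    import math

    def generalization_ratio(benchmark_score: float, training_tokens: float) -> float:
        """Capability achieved per order of magnitude of training input.

        Illustrative only: the functional form (dividing by the log of
        the token count) is an assumption, not an established metric.
        """
        return benchmark_score / math.log10(training_tokens)

    # Hypothetical human: strong performance from roughly a billion
    # "tokens" of lifetime language exposure.
    human = generalization_ratio(benchmark_score=90.0, training_tokens=1e9)

    # Hypothetical LLM: the same score, but ~ten trillion training tokens.
    llm = generalization_ratio(benchmark_score=90.0, training_tokens=1e13)

    # Higher ratio = more extrapolation squeezed out of less input.
    print(f"human: {human:.1f}  llm: {llm:.1f}")  # human: 10.0  llm: 6.9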

felipeerias · yesterday at 11:30 AM

The discussion about “AGI” is somewhat pointless, because the term is nebulous enough that it will probably end up being defined as whatever comes out of the huge ongoing investment in AI.

Nevertheless, we don’t have a good conceptual framework for thinking about these things, perhaps because we keep trying to apply human concepts to them.

The way I see it, an LLM crystallises a large (but incomplete and disembodied) slice of human culture, as represented by its training set. The fact that an LLM is able to generate human-sounding language …

keiferski · yesterday at 9:21 AM

Personally I think that kind of discussion is fruitless, not much more than entertainment.

If you’re asking big questions like “can a machine think?” or “is an AI conscious?” without doing the work of clarifying your concepts, then you’re only going to get vague ideas, sci-fi cultural tropes, and a host of other things.

I think the output question is also interesting enough on its own, because we can talk about the pragmatic effects of ChatGPT on writing without falling into this woo trap of thinking ChatGPT is making the human capacity for expression somehow extinct. But this requires one to cut through the hype and reactionary anti-hype, which is not an easy thing to do.

That is how I myself see AI: immensely useful new tools, but in no way some kind of new entity or consciousness, at least without doing the real philosophical work to figure out what that actually means.
