
Rebuff5007 · yesterday at 9:27 AM

Here's a definition: how impressive is the output relative to the input? And by input, I don't just mean the prompt, but all the training data itself.

Do you think someone who has only ever studied pre-calc would be able to work through a calculus book if they had sufficient time? How about a multi-variable calc book? How about grad-level mathematics?

IMO, intelligence and thinking are strictly about this ratio: what can you extrapolate from the smallest amount of information possible, and why? From this perspective, I don't think any of our LLMs are remotely intelligent, despite what our tech leaders say.
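
To make the ratio concrete, here's a minimal Python sketch. The function name, the way "skill" and "input bits" are measured, and all the numbers are my own made-up assumptions for illustration, not anything the parent specified:

    def intelligence_ratio(held_out_skill: float, input_bits: float) -> float:
        """Skill demonstrated on unseen tasks, divided by the amount of
        information consumed to acquire it. Higher means more was
        extrapolated from less input."""
        return held_out_skill / input_bits

    # Illustrative only: same exam performance, wildly different input budgets.
    human = intelligence_ratio(held_out_skill=0.9, input_bits=1e10)  # years of reading
    llm = intelligence_ratio(held_out_skill=0.9, input_bits=1e15)    # web-scale corpus
    print(f"human: {human:.2e} skill/bit, LLM: {llm:.2e} skill/bit")

Under this framing, equal output from vastly more input yields a much lower score, which is exactly the parent's complaint about LLMs.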


Replies

kryogen1c · yesterday at 11:36 AM

Hear, hear!

I have long thought this, but never had as good a way to put it as you did.

If you think about geniuses like Einstein and Ramanujan, they understood things before they had the mathematical language to express them. LLMs are the opposite: they fail to understand things even after untold effort and training data.

So the question is: how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence.

Ever had the experience of helping someone who's chronically doing the wrong thing, only to eventually find they had an incorrect assumption, an incorrect piece of reasoning generating deterministically wrong answers? LLMs don't do that; they simply lack understanding. They'll hallucinate unrelated things because they don't know what they're talking about. You may have also had this experience with someone :)

mycall · yesterday at 9:52 AM

Animals think, but they come with instincts, which breaks the output-relative-to-input test you propose. Behaviors are essentially pre-programmed input from millions of years of evolution, stored in DNA and neurology. Their learning is thus typically associative and domain-specific, not abstract extrapolation.

A crow bending a piece of wire into a hook to retrieve food demonstrates a novel solution extrapolated from minimal, non-instinctive environmental input. That kind of zero-shot problem-solving aligns better with your definition of intelligence.

tremon · yesterday at 3:35 PM

I'm not sure I understand what you're getting at. You seem to be deliberately comparing apples and oranges here: for an AI, we're supposed to include the entire training set in the definition of its input, but for a human we don't include the entirety of that human's experience and only look at the prompt?

lukebuehler · yesterday at 12:55 PM

That's an okay-ish definition, but to me this is more about whether this kind of "intelligence" is worth it, not whether it is intelligence at all. The current AI boom clearly thinks it is worth putting in that much input to get the current frontier-model level of output. Also, don't forget the input scales across roughly 1B weekly users at inference time.

I would say a good definition has to, minimally, take on the Turing test (even if you disagree with it, you should say why). Or, in current vibe parlance: it does "feel" intelligent to many people; they see intelligence in it. In my book, this allows us to call it intelligent, at least loosely.

skeeter2020 · yesterday at 2:38 PM

This feels too linear. Machines are great at ingesting huge volumes of data, following relatively simple rules, and producing optimized output, but are LLMs sufficiently better than humans at finding winding, multi-step connections across seemingly unrelated topics and fields? Have they shown any penchant for novel conclusions from observational science? What I think your ratio misses is the value of pulling, out of a giant body of knowledge, the targeted extrapolation or hypothesis that actually holds up.

hodgehog11 · yesterday at 11:58 AM

Yeah, that's compression. Although your later comments neglect the many years of physical experience that humans have, as well as the billions of years of evolution.

And yes, by this definition, LLMs pass with flying colours.
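
As a toy illustration of the compression framing (my example, not the commenter's): a compressor only shrinks data to the extent that it finds regularity in it, which is the same game as extrapolating structure from input:

    import os
    import zlib

    samples = {
        "structured": b"abc" * 1000,  # highly regular: lots of structure to exploit
        "random": os.urandom(3000),   # patternless: nothing to extrapolate
    }
    for name, data in samples.items():
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name}: compressed to {ratio:.1%} of original size")

The structured input shrinks to a tiny fraction of its size while the random one barely compresses at all; "passing with flying colours" here means the model has found the regularities in its training data.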

jononor · yesterday at 5:24 PM

For more on this perspective, see the paper "On the Measure of Intelligence" (F. Chollet, 2019), and more recently the ARC challenge/benchmarks, which are early attempts at using this kind of definition in practice to improve current systems.
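
For a flavor of what that looks like in practice, here's a toy sketch in the spirit of ARC (a made-up two-by-two task and a trivial solver, not an actual ARC puzzle): infer a transformation rule from a single example pair, then apply it to fresh input:

    # Hidden rule for this toy task: transpose the grid.
    train_in = [[1, 2], [3, 4]]
    train_out = [[1, 3], [2, 4]]

    def transpose(grid):
        return [list(row) for row in zip(*grid)]

    # A trivial "solver": pick whichever candidate rule explains the one example.
    candidates = [lambda g: g, transpose]
    rule = next(f for f in candidates if f(train_in) == train_out)

    test_in = [[5, 6, 7], [8, 9, 0]]
    print(rule(test_in))  # [[5, 8], [6, 9], [7, 0]]

The point of the benchmark is exactly the ratio discussed above: one example in, a generalizing rule out.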

rolisz · yesterday at 12:14 PM

Are the millions of years of evolution part of the training data for humans?

fragmede · yesterday at 6:03 PM

There are plenty of humans who will never "get" calculus, despite numerous attempts at the class and countless hours of 1:1 tutoring. Are those people not intelligent? Do they not think? We could say yes, they aren't, but by the metric of making money, plenty of people are smart enough to be rich while college math professors aren't. And while that's a facile way of measuring someone's worth or their contribution to society (some might even say "bad"), it remains that even if someone can't understand calculus, some of them are intelligent enough to understand other humans well enough to get rich in some fashion that wasn't simply handed to them.
