Hacker News

BugsJustFindMe · today at 5:39 PM · 4 replies

People are going to misinterpret this and overgeneralize the claim. This does not say that AI isn't reliable for things. It provides a method for quantifying the reliability for specific tasks.

You wouldn't say that a human who doesn't know how to read is unreliable at everything, only at reading.

Counting is something that even humans need to learn how to do. Toddlers also don't understand quantity. If a 2-year-old can count to even 10, it's through memorization, not understanding. It takes them about two more years of learning before they can comprehend things like numerical correspondence. But they still know how to do other things that aren't counting before then.


Replies

Topfi · today at 6:17 PM

Respectfully, toddlers cannot output usable code, nor have they memorised the results of an immense number of maths equations.

What this points at is the abstraction/emergence crux of it all. Why does an otherwise very capable LLM such as the GPT-5 series, despite having been trained on vastly more examples of frontend code of all shapes, sizes and quality levels, struggle to abstract that training data to the point where it can output any frontend that deviates from the examples it clearly leans on?

If LLMs, as they are now, were comparable with human learning, there'd be no scenario where a model that can output solutions to highly advanced equations cannot count properly.

Similarly, a model such as GPT-5, trained on nearly all frontend code ever committed to any repo online, would have internalised more than the one template OpenAI predominantly leaned on.

These models, and I think at this point there is little doubt about this, are impressive tools, but they still do not generalise or abstract information in the way a human mind does. That doesn't make them less impactful for industries, etc., but it does make any comparison to humans ill-suited.

coldtea · today at 5:45 PM

>Counting is something that even humans need to learn how to do

No human who can program, solve advanced math problems, or talk about advanced problem domains at an expert level, however, would fail to count to 5.

This is not a mere "LLMs, like humans, also need to be taught this" but points to a fundamental mismatch about how humans and LLMs learn.

(And even if they merely needed to be taught, why would their huge corpus fail to cover that "teaching", yet cover far more advanced topics in math and other domains?)

nkrisc · today at 6:08 PM

You’re conflating counting and language.

Many animals can count. Counting is recognizing that the box with 3 apples is preferable to the one with 2 apples.

Yes, 2-year-olds might struggle with the externalization of numeric identities, but if you have 1 M&M in one hand and 5 in the other and ask which they want, they'll take the 5.

LLMs have the language part down, but fundamentally can’t count.

irishcoffee · today at 5:44 PM

> Counting is something that even humans need to learn how to do. Toddlers also don't understand quantity. If they're able to count to even 10 it's through memorization and not understanding.

I completely agree with you. LLMs are regurgitation machines with less intellect than a toddler, you nailed it.

AI is here!