Hacker News

synapsomorphy · today at 4:10 AM · 2 replies

This is somewhat disingenuous IMO. Language models do NOT explicitly tag parts of speech, or construct grammatical trees of relationships between words [1].

It also feels like motivated reasoning to make them seem dumb, because in reality we mostly have no clue what algorithms are running inside LLMs.

> When you or I say "dog", we might recall the feeling of fur, the sound of barking [..] But when a model sees "dog", it sees a vector of numbers

When o3 or Gemini sees "dog", it might recall the feeling of fur, the sound of barking [..] But when a human hears "dog", all that happens is electrical impulses in neurons.
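To make the "vector of numbers" point concrete: in a language model, a word's "meaning" is just a position in a high-dimensional space, and relatedness shows up as geometric closeness. Here's a toy sketch with made-up 4-dimensional embeddings (real models learn vectors with thousands of dimensions; these numbers are purely illustrative):

```python
import math

# Hypothetical embeddings, invented for illustration only.
embeddings = {
    "dog": [0.8, 0.1, 0.6, 0.2],
    "cat": [0.7, 0.2, 0.5, 0.3],
    "car": [0.1, 0.9, 0.2, 0.8],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

# To the model, "dog is more like cat than like car" is purely a
# statement about angles between vectors -- nothing else underneath.
print(cosine(embeddings["dog"], embeddings["cat"]))  # high (~0.98)
print(cosine(embeddings["dog"], embeddings["car"]))  # lower (~0.36)
```

Whether that geometry can ever amount to "true meaning" is exactly the question being argued here; the neurons-vs-vectors comparison above cuts both ways.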

The stochastic parrot argument has been had a million times over, and this doesn't feel like a substantial contribution. If you think vectors of numbers can never constitute true meaning, then either (a) no amount of silicon can ever make a perfect simulation of a human brain, or (b) a perfectly simulated brain would not actually think or feel. Both seem very unlikely to me.

There are much better resources out there if you want to learn our best current understanding of what algorithms run inside LLMs [2][3]. It's a whole field called mechanistic interpretability, and it's way, way, way more complicated than tagging parts of speech.

[1] Maybe attention learns something like this, but it's doing a whole lot more than just that.

[2] https://transformer-circuits.pub/2025/attribution-graphs/bio...

[3] https://transformer-circuits.pub/2022/toy_model/index.html

P.S. The explainer has em dashes aplenty. I strongly prefer to see disclaimers (even if it's a losing battle) when LLMs are used heavily for writing, especially for more technical topics like this.


Replies

AIPedant · today at 4:32 PM

I nominally agree with this point - AGI is theoretically possible according to the Church-Turing thesis; we can "just" solve the Schrödinger equation for every atom in the human body.

The more salient point is that when a model reads “dog” it associates a bunch of text and images vaguely related to dogs. But when a human reads “dog” they associate their experiences with dogs, or other animals if they haven’t ever met a dog. In particular, cats who have met dogs also have some concept of “dog,” without using language at all. Humans share this intuitive form of understanding, and use it with text/speech/images to extend our understanding to things we haven’t encountered personally. But multimodal LLMs have no access to this form of intelligence, shared by all mammals, and in general they have no common sense. They can fake some common sense with huge amounts of text, but it is not reliable: the space of feline-level common sense deductions is not technically infinite, but it is incomprehensibly vast compared to the corpus of all human text and photographs.