> It almost sounds like you’re saying there’s essentially an LLM inside everyone’s brain. Is that what you’re saying?
> Pretty much. I think the language network is very similar in many ways to early LLMs, which learn the regularities of language and how words relate to each other. It’s not so hard to imagine, right?
Yet this completely glosses over the role of rhythm in parsing language. LLMs aren’t rhythmic at all, are they? Maybe each token production is a cycle, though… hmm…
I think it's obvious that she means it's something _like_ LLMs in some aspects. You're right that rhythm and intonation are very important in parsing language. (And also an important cue when learning how to parse it!) Clearly the human language network is not like an LLM in that sense. However, it _is_ a bit like an _early_ LLM (remember GPT-2?) in the sense that it can produce and parse language without making much deeper sense of it.
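To make the "each token production is a cycle" idea concrete, here's a toy sketch: a bigram model that learns which word tends to follow which (the kind of surface regularity early LLMs pick up) and emits tokens one discrete step at a time. This is deliberately not how real LLMs work, and the corpus is made up; it just shows generation as a loop of cycles.

```python
import random

# Made-up corpus; the "model" only learns surface word-to-word regularities.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count word -> next-word transitions (a bigram table).
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n_tokens, seed=0):
    """Emit tokens one at a time -- each loop iteration is one 'cycle'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 5))
```

The output is locally plausible but carries no deeper meaning, which is roughly the GPT-2-era point being made above.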