I wouldn't read too much into the LLM analogy. The interview is disappointingly short, padded with unnecessarily tall photographs, and the interviewer (the one who brought up LLMs and ChatGPT, and who has a history of writing AI articles: https://www.quantamagazine.org/authors/john-pavlus/) almost seemed to have an agenda to contextualize the research this way. Outside hostile contexts such as politics, interviewees tend to be agreeable and cooperative, which means an interview can be steered toward a predetermined angle; here, probably for clickbait.
In any case, there's a key disanalogy:
> Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess).
The disanalogy you quote might actually be the key insight. What if language operates at two levels, like Kahneman's System 1 and System 2?
Level 1: Nearly autonomic — pattern-matched language that acts directly on the nervous system. Evidence: how insults land before you "process" them, how fluent speakers produce speech faster than conscious deliberation allows, and the entire body of work on hypnotic suggestion, which relies on language bypassing conscious evaluation entirely.
Level 2: The conscious formulation you describe — the translator between perception and meaning.
LLMs might be decent models of Level 1 but have nothing corresponding to Level 2. Fedorenko's "glorified parser" could be the Level 1 system.