
keeda · today at 4:11 AM · 0 replies

Fascinating read, even though I think the model deviations over time have more to do with context windows getting too large. If nothing else, it's worth reading for the references to quirks of human cognition and "free will."

The "interpreter" is a concept that I found especially intriguing within the context of a leading theory in cognition research called "Predictive Processing." Here, the brain is constantly operating in a tight closed loop of predicting sensory input using an internal model of the world, and course-correcting based on actual sensory input. Mostly incorrect predictions are used to update the internal model and then subconsciously discarded. Maybe the "interpreter" is the same mechanism applied to reconciling predictions about our own reasoning with our actual actions?
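To make the loop concrete, here's a toy sketch of that predict/compare/update cycle (my own illustration, not from the article or any real cognition model): the "internal model" is just a running scalar estimate that predicts the next sensory input, measures the prediction error, and nudges itself toward the observation.

```python
import random

def predictive_loop(signal, learning_rate=0.2):
    """Toy predictive-processing loop: predict, compare, update."""
    estimate = 0.0                         # internal model: a single belief
    errors = []
    for observation in signal:
        prediction = estimate              # predict the next sensory input
        error = observation - prediction   # mismatch with actual input
        estimate += learning_rate * error  # course-correct the model
        errors.append(abs(error))          # small errors effectively get
    return estimate, errors                # "discarded"; big ones update it

random.seed(0)
# a noisy sensory stream around a hidden true value of 5.0
stream = [5.0 + random.gauss(0, 0.5) for _ in range(200)]
final, errs = predictive_loop(stream)
# the belief settles near 5.0 and prediction errors shrink over time
```

Obviously the brain's version would be hierarchical and vastly more complex, but the shape is the same: the model is updated by the residual, not by the raw input.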

Even if the hypotheses in TFA are not accurate, it's very interesting to compare our brains to LLMs. This is why all the unending debates about whether LLMs are "really thinking" are meaningless -- we don't even understand how we ourselves think!