observationist | 01/21/2025

This is really cool - it's a relatively well-known idea, but it's great to see it refined and better understood. It's amazing how sparse the brain is; a single neuron can trigger a profound change in contextual relations and play a critical role in how things get interpreted, remembered, predicted, or otherwise processed.

That single cell can have up to 10,000 features, and those features are implicitly processed; a feature only activates if its semantic relevance surpasses some threshold of contribution to whatever you're thinking at a given moment. Each feature is binary, either off or on at a given time t. Compare this to artificial neural networks, where a notion or concept or idea is a dense embedding; if you have 10,000 features, every one of them is computed and processed on every pass, relevant or not. Attention, gating, routing, and MoE get into sparsity and start moving artificial networks in the right direction, but they're still enormously clunky and inefficient compared to biological brains.
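
A toy contrast in Python (the 10,000-feature count is from above; the 2% cutoff and all the names are just illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Dense ANN-style embedding: all 10,000 feature dimensions are
    # computed on every forward pass, relevant or not.
    dense_embedding = rng.standard_normal(10_000)
    dense_ops = dense_embedding.size          # all 10,000 touched

    # Implicit sparse representation: a feature only "fires" when its
    # relevance crosses a threshold, so it's binary (on/off) and most
    # features contribute nothing and cost nothing downstream.
    relevance = rng.standard_normal(10_000)
    threshold = np.quantile(relevance, 0.98)  # keep roughly the top 2%
    active = relevance > threshold            # boolean on/off at time t
    sparse_ops = int(active.sum())            # ~200 of 10,000 fire

    print(dense_ops, sparse_ops)              # 10000 vs ~200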

Implicit sparse distributed representation is how the brain can get to ~2% sparse activations, with rapid, precise, and deep learning of features in real time, where learning one new thing can recontextualize huge swathes of knowledge.
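
Back-of-the-envelope on why 2% sparsity is so powerful (n and w here are made-up round numbers, not measured values):

    from math import comb, log10

    n, w = 10_000, 200      # 10,000 binary features, ~2% active
    codes = comb(n, w)      # distinct 2%-sparse activation patterns
    print(f"~10^{log10(codes):.0f} possible codes")

    # Two random sparse codes share almost no active bits, so new
    # patterns can be stored without colliding with existing ones --
    # one reason a single new association can re-link so much context.
    expected_overlap = w * w / n
    print(expected_overlap)  # ~4 bits shared by chance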

These neurons also enable feats of memory, like memorizing the order of 10 decks of cards in 5 minutes, reciting tens of thousands of digits of pi, or cabbies acquiring "The Knowledge" - every street, road, sidewalk, alley, bridge, and other feature of London - well enough to traverse the terrain in their minds. It's wonderful that this knowledge is available to us, that the workings of our minds are being unveiled.