> Who out there is working on ... infinite history?
Many people are still working on improving RNNs, mostly in academia. Examples off the top of my head:
* RWKV: https://arxiv.org/abs/2305.13048 / https://arxiv.org/abs/2404.05892 / https://arxiv.org/abs/2503.14456
* Linear attention: https://arxiv.org/abs/2006.16236
* State space models: https://arxiv.org/abs/2312.00752 / https://arxiv.org/abs/2405.21060
* Linear RNNs: https://arxiv.org/abs/2410.01201
Industry OTOH has gone all-in on Transformers.
> Industry OTOH has gone all-in on Transformers.
It's so annoying. Transformers keep improving and recurrent networks are harder to train, so until we hit some real wall, companies aren't eager to diverge. It's like lithium batteries improving just fast enough that it was never profitable to work on sodium ones, even though we'd really like the sodium ones to win out.
RNNs have two huge issues:
* Long context. Recurrence degrades the signal, for the same reason that plain 'deep' architectures don't go much past 3-4 layers before you need residual connections and the like.
* (This is the big one.) Training performance is terrible, since you can't parallelize them across a sequence the way you can with causally masked attention in a transformer.
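To make that second point concrete, here's a toy NumPy sketch (made-up shapes and weights, not taken from any of the papers above) of why the RNN forward pass is an unavoidable serial loop while causally masked attention is one batched matmul:

```python
import numpy as np

T, d = 512, 64                        # sequence length, hidden size (arbitrary)
x = np.random.randn(T, d)             # token embeddings
Wx, Wh = np.random.randn(d, d), np.random.randn(d, d)

# Vanilla RNN: step t needs h from step t-1, so the forward pass over the
# sequence is a length-T serial loop no matter how big your GPU is.
h = np.zeros(d)
rnn_states = []
for t in range(T):
    h = np.tanh(x[t] @ Wx + h @ Wh)
    rnn_states.append(h)

# Causally masked attention: every position is computed at once; the mask,
# not a loop, supplies causality, so the whole sequence trains in parallel.
Wq, Wk, Wv = np.random.randn(3, d, d)
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)
scores[np.triu_indices(T, k=1)] = -np.inf    # causal mask
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attn_out = weights @ V                       # all T positions in one shot
```

On an accelerator that loop is the whole problem: T dependent steps instead of a handful of big parallel kernels.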
On the huge benefit side, though, you get:
* A fixed state size, so perfect batch packing, predictable memory use, easy load/unload from a batch, and O(1) cost per generated token, which adds up to massive inference performance gains.
* Unlimited context (or at least no need for position embeddings or any similar scheme).
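Rough back-of-the-envelope on the fixed-state point (the layer count, hidden size, and context length below are purely illustrative, and I'm assuming a plain d-sized hidden vector per layer):

```python
# Hypothetical model dimensions, fp16 = 2 bytes per value.
d_model, n_layers, n_ctx = 4096, 32, 128_000

rnn_state = n_layers * d_model                 # fixed, whatever the context
kv_cache = n_layers * 2 * n_ctx * d_model      # keys + values for every position

print(f"RNN state: {rnn_state * 2 / 1e6:.2f} MB, constant")           # ~0.26 MB
print(f"KV cache:  {kv_cache * 2 / 1e9:.1f} GB at {n_ctx:,} tokens")  # ~67 GB
# Real deployments shrink the cache with GQA/quantization, but it still
# grows linearly with context; the recurrent state does not.
```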
Taking the best of both worlds is definitely where the future is: an architecture that trains in parallel, has a fixed state size so you can load/unload and pack batches perfectly, handles unlimited context (with perfect recall), etc. That is the real architecture to go for.
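For what it's worth, linear attention is the simplest existing example of that train-parallel / infer-recurrent duality. A toy sketch (unnormalized, with a made-up feature map `phi`; not lifted from any specific paper) showing the same model computed both ways:

```python
import numpy as np

T, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, T, d))
phi = lambda z: np.maximum(z, 0) + 1e-3        # toy positive feature map

# Parallel ("training") form: one causally masked matmul over the sequence.
scores = phi(Q) @ phi(K).T
scores *= np.tril(np.ones((T, T)))             # causal mask
parallel_out = scores @ V

# Recurrent ("inference") form: carry a fixed d x d state instead of a cache.
S = np.zeros((d, d))
recurrent_out = np.zeros((T, d))
for t in range(T):
    S += np.outer(phi(K[t]), V[t])             # state update, O(d^2) per token
    recurrent_out[t] = phi(Q[t]) @ S           # read-out cost independent of t

assert np.allclose(parallel_out, recurrent_out)   # same outputs, two compute modes
```

The catch is that the fixed-size state is lossy, which is exactly the "perfect recall" part where transformers still win.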