
jmward01 · today at 12:06 AM

I built guided window attention (literally predicting the position of the window) a while ago and it works great. Why are we still stuck on forms of attention that look at the entire context? Do humans work that way? Do I need the whole book to predict the next word? Who out there is working on really new, unique ways to deal with infinite history, other than me of course :)
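A rough sketch of what "predict the position of the window" might look like in practice; the module name, the sigmoid center head, and the soft distance penalty are illustrative assumptions, not the commenter's actual implementation:

```python
# Hypothetical sketch of "guided window" attention: each query predicts where
# its window of keys should sit, and attention is biased toward that window
# instead of spreading over the whole context.  Names and design choices are
# illustrative assumptions, not the commenter's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedWindowAttention(nn.Module):
    def __init__(self, d_model: int, window: int = 64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Tiny head that predicts, per query, where in the past its window
        # should be centered (as a fraction of the positions seen so far).
        self.center_head = nn.Linear(d_model, 1)
        self.window = window
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, T, _ = x.shape                                        # x: (B, T, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)

        pos = torch.arange(T, device=x.device, dtype=x.dtype)
        frac = torch.sigmoid(self.center_head(x)).squeeze(-1)   # (B, T) in (0, 1)
        center = frac * pos                                      # window center in [0, t]

        # Distance of every key position from each query's predicted center.
        dist = (pos.view(1, 1, T) - center.unsqueeze(-1)).abs()  # (B, T, T)

        scores = torch.einsum("btd,bsd->bts", q, k) * self.scale
        # Soft window: penalize keys far from the predicted center.  A hard
        # rectangular mask alone would give the center head no gradient, so a
        # smooth penalty is used; a hard cutoff could be added at inference.
        scores = scores - (dist / (self.window / 2)) ** 2
        # Causal mask: queries only see current and past positions.
        causal = pos.view(1, T, 1) >= pos.view(1, 1, T)
        scores = scores.masked_fill(~causal, float("-inf"))

        attn = F.softmax(scores, dim=-1)
        return torch.einsum("bts,bsd->btd", attn, v)
```

This dense version still touches every key, so it only shows the mechanism; the actual compute win would come from gathering just the window-sized slice of keys and values around each predicted center.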


Replies

cs702 · today at 12:28 AM

> Who out there is working on ... infinite history?

Many people are still working on improving RNNs, mostly in academia. Examples off the top of my head:

* RWKV: https://arxiv.org/abs/2305.13048 / https://arxiv.org/abs/2404.05892 / https://arxiv.org/abs/2503.14456

* Linear attention: https://arxiv.org/abs/2006.16236

* State space models: https://arxiv.org/abs/2312.00752 / https://arxiv.org/abs/2405.21060

* Linear RNNs: https://arxiv.org/abs/2410.01201

Industry OTOH has gone all-in on Transformers.
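The connecting idea behind most of the papers above is replacing softmax attention over the full history with a recurrence over a fixed-size state. A toy sketch of causal linear attention computed that way, following the basic recurrence from the Katharopoulos et al. paper linked above (https://arxiv.org/abs/2006.16236); the feature map and shapes are simplified for illustration:

```python
# Toy illustration of the idea shared by linear-attention / RWKV-style models:
# causal attention rewritten as a recurrence over a constant-size state, so
# arbitrarily long history costs O(1) memory per step instead of growing with T.
# Simplified sketch, not code from any of the linked papers.
import numpy as np

def feature_map(x):
    # phi(x) = elu(x) + 1: keeps features positive so the normalizer stays valid.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_recurrent(Q, K, V):
    # Q, K, V: (T, d). Returns (T, d) outputs using a (d, d) state matrix S
    # and a (d,) normalizer z -- their size does not depend on T.
    T, d = Q.shape
    S = np.zeros((d, d))
    z = np.zeros(d)
    out = np.zeros_like(V)
    for t in range(T):
        q, k, v = feature_map(Q[t]), feature_map(K[t]), V[t]
        S += np.outer(k, v)              # accumulate key-value associations
        z += k                           # accumulate normalizer
        out[t] = (q @ S) / (q @ z + 1e-6)
    return out

# Equivalent to, for each t: sum over s <= t of (phi(q_t)·phi(k_s)) v_s,
# normalized -- i.e. attention with the softmax kernel replaced by a
# dot product of features, which is what makes the recurrence possible.
rng = np.random.default_rng(0)
T, d = 8, 4
Q, K, V = rng.normal(size=(T, d)), rng.normal(size=(T, d)), rng.normal(size=(T, d))
print(linear_attention_recurrent(Q, K, V).shape)   # (8, 4)
```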
