Hacker News

jwilber · last Wednesday at 10:56 PM

Achieved by “applying convolution operations over queries, keys and heads, allowing nearby queries and keys to affect each other's attention weights for more precise attention”
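For intuition, here's a minimal numpy sketch of what convolving attention logits over query/key positions could look like. This is my own illustration, not the paper's code: the shapes, the naive `conv2d_same` helper, and the 3x3 kernel initialization are all assumptions, and a real causal decoder would also need masking so a query can't see future keys.

```python
import numpy as np

def conv2d_same(x, kernel):
    # Naive 'same'-padded 2D cross-correlation, for illustration only.
    kq, kk = kernel.shape
    pq, pk = kq // 2, kk // 2
    padded = np.pad(x, ((pq, pq), (pk, pk)))
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + kq, j:j + kk] * kernel)
    return out

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy single-head attention: 4 query positions, 4 key positions, d = 8.
rng = np.random.default_rng(0)
d = 8
Q = rng.normal(size=(4, d))
K = rng.normal(size=(4, d))
logits = Q @ K.T / np.sqrt(d)   # standard scaled dot-product logits

# Hypothetical learned 3x3 kernel: lets neighboring (query, key) pairs
# influence each other's attention weight before the softmax.
kernel = rng.normal(size=(3, 3)) * 0.1
kernel[1, 1] += 1.0             # start near the identity (plain attention)

mixed = conv2d_same(logits, kernel)
weights = softmax(mixed, axis=-1)
```

The near-identity kernel initialization means the model starts out behaving like vanilla attention and can learn to blend in neighboring scores during training.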

Cool to see convolutions making such a comeback lately in the LLM world. See also the recent StripedHyena 2 architecture, which uses the conv-based Hyena operator to great success:

https://arxiv.org/abs/2503.01868


Replies

janalsncm · last Wednesday at 11:45 PM

The null hypothesis is that more compute or a bigger network = better results. Conv operations make sense on images because the data is naturally two-dimensional, so applying an operation across a sliding window makes sense.

Skimming the paper, I don’t see them testing against e.g. a normal decoder with an extra layer or something.

I don’t see the same logic applying to an embedding, where the individual indices matter. Adjacent indices in an embedding have no relationship, unlike adjacent pixels in an image.
