
The Q, K, V Matrices

207 points | by yashsngh | last Wednesday at 8:18 AM | 79 comments

Comments

roadside_picnic | yesterday at 12:52 AM

I will beat loudly on the "Attention is a reinvention of Kernel Smoothing" drum until it is common knowledge. It looks like Cosma Shalizi's fantastic website is down for now, so here's an archive link to his essential reading on this topic [0].

If you're interested in machine learning at all and not very strong on kernel methods, I highly recommend taking a deep dive. Such a huge amount of ML can be framed through the lens of kernel methods (and things like Gaussian Processes will become much easier to understand).

0. https://web.archive.org/web/20250820184917/http://bactra.org...
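
If you want the correspondence in one bite-sized example: a Nadaraya-Watson kernel smoother with an exponential kernel on scaled dot products is exactly single-query scaled dot-product attention. A toy NumPy sketch (data and dimensions made up):

    import numpy as np

    def nadaraya_watson(q, keys, values, kernel):
        # Kernel smoothing: the output is a kernel-weighted average of the values.
        w = np.array([kernel(q, k) for k in keys])
        w = w / w.sum()
        return w @ values

    def exp_kernel(q, k):
        # Exponential kernel on the scaled dot product == the softmax numerator,
        # so normalizing the weights reproduces single-query attention.
        return np.exp(q @ k / np.sqrt(len(k)))

    rng = np.random.default_rng(0)
    keys, values = rng.normal(size=(5, 4)), rng.normal(size=(5, 3))
    q = rng.normal(size=4)
    print(nadaraya_watson(q, keys, values, exp_kernel))  # one smoothed/attended output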

libraryofbabel | yesterday at 12:36 AM

This is ok (could use some diagrams!), but I don't think anyone coming to this for the first time will be able to use it to really teach themselves the LLM attention mechanism. It's a hard topic and requires two or three book chapters at least if you really want to start grokking it!

For anyone serious about coming to grips with this stuff, I would strongly recommend Sebastian Raschka's excellent book Build a Large Language Model (From Scratch), which I just finished reading. It's approachable and also detailed.

As an aside, does anyone else find the whole "database lookup" motivation for QKV kind of confusing? (In the article: "Query (Q): What am I looking for? Key (K): What do I contain? Value (V): What information do I actually hold?") I've never really gotten it, and I just switched to thinking of QKV as a way to construct a fairly general series of linear-algebra transformations on the input (a sequence of token embedding vectors x) that is quadratic in x and ensures that every token can relate to every other token in the NxN attention matrix. After all, the actual contents and "meaning" of QKV are very opaque: the weights used to construct them are learned during training. Furthermore, there is a lot of symmetry between Q and K in the algebra, which gets broken only by the causal mask. Or do people find this motivation useful and meaningful in some deeper way? What am I missing?

[edit: on this last question, the "Attention is just Kernel Smoothing" article that roadside_picnic posted above looks really interesting in terms of giving a clean, generalized mathematical approach to this, and it also affirms that I'm not completely off the mark in being a bit suspicious about the whole hand-wavy "database lookup" Queries/Keys/Values interpretation]
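
To make the linear-algebra framing above concrete, here's roughly the computation I mean (a minimal NumPy sketch; the dimensions and the W_q/W_k/W_v weights are made-up stand-ins for what would be learned in training):

    import numpy as np

    def self_attention(x, W_q, W_k, W_v):
        # x: (n, d) token embeddings; everything below is plain linear algebra on x
        Q, K, V = x @ W_q, x @ W_k, x @ W_v
        scores = Q @ K.T / np.sqrt(K.shape[-1])     # (n, n) attention matrix; x enters via both Q and K
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)    # causal mask, the thing that breaks the Q/K symmetry
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)          # row-wise softmax
        return w @ V                                # (n, d_v) contextualized tokens

    rng = np.random.default_rng(0)
    n, d, d_k = 5, 8, 4
    x = rng.normal(size=(n, d))
    W_q, W_k, W_v = (rng.normal(size=(d, d_k)) for _ in range(3))
    print(self_attention(x, W_q, W_k, W_v).shape)   # (5, 4)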

enjeyw | yesterday at 4:37 AM

One of the big problems with Attention Mechanisms is that the Query needs to look over every single key, which for long contexts becomes very expensive.

A little side project I've been working on is to train a model that sits on top of the LLM, looks at each key, determines whether it will still be needed after a certain lifespan, and evicts it once that lifespan has expired (if possible). Still working on it, but my first-pass test cuts the keys by 90%!

https://github.com/enjeyw/smartkv
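
(For a rough sense of the shape of the idea -- a toy sketch, not the repo's actual code; the "predicted lifespan" and the arrays here are placeholders:)

    import numpy as np

    def evict_expired(keys, values, ages, predicted_lifespans):
        # Toy sketch: a small side model predicted a "lifespan" for each cached
        # key when it was written; once a key's age exceeds that lifespan it is
        # dropped, so later queries no longer have to score it.
        keep = ages <= predicted_lifespans
        return keys[keep], values[keep], ages[keep], predicted_lifespans[keep]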

storus | yesterday at 11:31 AM

QKV attention is just a probabilistic lookup table, where the Q/K/V projections let you adjust the input/output dimensions to fit your NN block. If your Q perfectly matches some known K (from training), you get the exact V; otherwise you get some linear combination of all the Vs, weighted by the attention.
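
A toy numeric sketch of that view (made-up keys and 1-D values; the softmax over Q.K scores does the weighting, and the sharper the scores, the closer you get to an exact lookup):

    import numpy as np

    keys   = np.array([[1.0, 0.0], [0.0, 1.0]])   # stored K's
    values = np.array([[10.0], [20.0]])           # their V's

    def lookup(q, temperature=1.0):
        scores = keys @ q / temperature
        w = np.exp(scores - scores.max()); w /= w.sum()   # softmax weights
        return w @ values                                  # blend of all V's

    print(lookup(np.array([1.0, 0.0])))          # ~[12.7]: leans toward 10 but still a blend
    print(lookup(np.array([1.0, 0.0]), 0.01))    # ~[10.0]: sharp scores ~= exact lookup
    print(lookup(np.array([0.5, 0.5])))          # [15.0]: ambiguous query averages the V's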

hackpert | yesterday at 12:30 PM

These metaphorical database analogies bug me, and, from the looks of it, a lot of other people in the comments too! So far, some of the most reasonable explanations I have found that take training dynamics into account are from Lenka Zdeborova's lab (albeit in toy, linear-attention settings, but it's easy to see why they generalize to practical ones). For instance, this is a lovely paper: https://arxiv.org/abs/2509.24914

psaccounts | yesterday at 7:40 PM

I published a video that explains Self-Attention and Multi-Head Attention in a different way -- going from intuition, to math, to code, starting from the end result and walking backward to the actual method.

Hopefully this sheds light on this important topic in a way that differs from other approaches and provides the clarity needed to understand the Transformer architecture. It starts at 41:22 in the video below.

https://youtu.be/6jyL6NB3_LI?t=2482

MontyCarloHall | yesterday at 2:13 AM

The confusing thing about attention in this article (and the famous "Attention is all you need" paper it's derived from) is the heavy focus on self-attention. In self-attention, Q/K/V are all derived from the same input tokens, so it's confusing to distinguish their respective purposes.

I find attention much easier to understand in the original attention paper [0], which focuses on cross-attention for machine translation. In translation, the input sentence to be translated is tokenized into vectors {x_1...x_n}. The translated sentence is autoregressively generated into tokens {y_1...y_m}. To generate y_j, the model computes a similarity score of the previously generated token y_{j-1} against every x_i via the dot product s_{i,j} = x_i*K*y_{j-1}, transformed by the Key matrix. These are then softmaxed to create a weight vector a_j = softmax_i(s_{i,j}). The weighted average of X = [x_1|...|x_n] is taken with respect to a_j and transformed by the Value matrix, i.e. c_j = V*X*a_j. c_j is then passed to additional network layers to generate the output token y_j.

tl;dr: given the previous output token, compute its similarity to each input token (via K). Use those similarity scores to compute a weighted average across all input tokens, and use that weighted average to generate the next output token (via V).

Note that in this paper, the Query matrix is not explicitly used. It can be thought of as a token preprocessor: rather than computing s_{i,j} = x_i*K*y_{j-1}, each x_i is first linearly transformed by some matrix Q. Because this paper used an RNN (specifically, an LSTM) to encode the tokens, such transformations on the input tokens are implicit in each LSTM module.

[0] https://arxiv.org/pdf/1508.04025 (predates "Attention is all you need" by 3 years)
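
Here's a minimal NumPy sketch of those formulas, with toy dimensions; the K and V matrices are random stand-ins for the learned ones:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 4, 6                        # n input tokens, embedding dimension d
    X = rng.normal(size=(d, n))        # input tokens x_1..x_n as columns
    K = rng.normal(size=(d, d))        # Key matrix (learned; random stand-in here)
    V = rng.normal(size=(d, d))        # Value matrix (learned; random stand-in here)
    y_prev = rng.normal(size=d)        # previously generated output token y_{j-1}

    s = X.T @ K @ y_prev                    # s_{i,j} = x_i*K*y_{j-1}, one score per input token
    a = np.exp(s - s.max()); a /= a.sum()   # a_j = softmax_i(s_{i,j})
    c = V @ X @ a                           # c_j = V*X*a_j, passed on to generate y_j
    print(c.shape)                          # (6,)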

CephalopodMD | yesterday at 4:04 AM

I think of it more from an information retrieval (i.e. search) perspective.

Imagine the input text as though it were the whole internet and each page is just 1 token. Your job is to build a neural-network Google results page for that mini internet of tokens.

In traditional search, we are given a search query, and we want to find web pages via an intermediate search results page with 10 blue links. Basically, when we're Googling something, we want to know "What web pages are relevant to this given search query?", and then given those links we ask "what do those web pages actually say?" and click on the links to answer our question. In this case, the "Query" is obviously the user search query, the "Key" is one of the ten blue links (usually the title of the page), and the "Value" is the content of the web page that link goes to.

In the attention mechanism, we are given a token and we want to find its meaning when contextualized with other tokens. Basically, we are first trying to answer the question "which other tokens are relevant to this token?", and then given the answer to that we ask "what is the meaning of the original token given these other relevant tokens?" The "Query" is a given token in the input text, the "Key" is another token in the input text, and the "Value" is the final meaning of the original token with that other token in context (in the form of an embedding). For a given token, you can imagine it is as though the attention mechanism "clicked the 10 blue links" of the other most relevant tokens in the input and combined them in some way to figure out the meaning of the original query token (and also you might imagine we ran such a query in parallel for every token in the input text at the same time).

So the self-attention mechanism is basically Google Search, but instead of a user query it's a token in the input, instead of a blue link it's another token, and instead of a web page it's meaning.
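
A toy sketch of the analogy (real attention softly weights all tokens rather than hard-picking a top 10, but this makes the "click the blue links" picture concrete; the function and names are made up):

    import numpy as np

    def search_style_attention(query_vec, key_vecs, value_vecs, top_k=10):
        scores = key_vecs @ query_vec               # rank every "page" (token) against the query
        top = np.argsort(scores)[::-1][:top_k]      # the "10 blue links"
        w = np.exp(scores[top] - scores[top].max())
        w /= w.sum()                                # how much weight each clicked page gets
        return w @ value_vecs[top]                  # combined meaning of the query token in context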

beklein | yesterday at 2:39 PM

Thanks for the post and the explanation.

I really enjoyed this relevant article about prompt caching, where the author explains some of the same principles and uses some additional visuals, though the main point there is why KV cache hits make your LLM API usage much cheaper: https://ngrok.com/blog/prompt-caching/

sp1982 | yesterday at 1:23 AM

Nice, I tried to write up a simpler explanation of LLM attention a few days back too @ https://kaamvaam.com/machine-learning-ai/llm-attention-expla... One thing that stumped me for a bit is the need for the V matrix.

BrokenCogs | yesterday at 3:29 AM

"When we read a sentence like “The cat sat on the mat because it was comfortable,” our brain automatically knows that “it” refers to “the mat” and not “the cat.” "

Am I the only one who thinks it's not obvious that "it" refers to the mat? The cat could be sitting on the mat because the cat is comfortable.

lostmsu | yesterday at 4:09 PM

I have a totally different interpretation and I'm not sharing, folks.

villgax | yesterday at 7:54 AM

The LLM smell is now an Oxford comma.