Batching isn't about "moving weights around less". Where do you move the weights anyway once they are loaded into the GPU VRAM? Batching, as always in CS problems, is about maximizing the compute done per round trip, and in this case that's the DMA transfer of context from CPU RAM to GPU VRAM.
The premise of self-attention is exactly that it isn't context-free, so it is also incorrect to say that batched requests do not mathematically affect each other. They do, and that's by design.
> Where do you move the weights anyway once they are loaded into the GPU VRAM?
The GPU can’t do anything with weights while they are in VRAM. They have to be moved into the GPU itself first.
So it is about memory round trips, but not between RAM and VRAM. It's the round trips between the VRAM and the registers on the GPU die. With batch processing, the calculations for all batched requests can be done while the model parameters sit in the GPU's registers. If those requests were processed sequentially instead, the number of trips between the VRAM and the GPU would be multiplied by the number of individual inferences.
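To make the amortization concrete, here's a back-of-the-envelope sketch in Python (all numbers are toy assumptions: a single fp16 weight matrix of a hypothetical 4096-wide layer, counting only weight traffic and ignoring activations and the KV cache). The useful FLOPs performed per byte of weights fetched from VRAM scale linearly with the batch size:

```python
# Toy numbers: one fp16 weight matrix of a hypothetical 4096-wide layer.
d = 4096
bytes_per_param = 2                      # fp16
weight_bytes = d * d * bytes_per_param   # fetched from VRAM once per pass

for batch in (1, 8, 64):
    flops = 2 * d * d * batch            # one matvec (multiply + add) per request
    # Only weight traffic is counted; real kernels also move activations
    # and the KV cache, so this is an upper bound on the amortization.
    print(f"batch={batch:3d}  FLOPs per weight byte: {flops / weight_bytes:.0f}")
```

At batch 1 you get roughly one FLOP per weight byte, far below the hundreds of FLOPs per byte a modern GPU needs to be compute-bound, which is why single-request decoding is memory-bandwidth-bound and batching helps so much.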
Also, batched prompts and their outputs are indeed mathematically independent of each other: attention is computed within each sequence, never across the batch dimension.
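Here's a minimal NumPy sketch of that independence, assuming a toy single-head causal attention (`W_qkv`, `attention`, and the shapes are all illustrative, not any real library's API). The score tensor in a batched pass is (B, T, T): queries of one prompt only ever meet keys of the same prompt, so the batched result matches running each prompt alone:

```python
import numpy as np

rng = np.random.default_rng(0)
B, T, d = 2, 4, 8                        # batch size, tokens per prompt, toy head dim
W_qkv = rng.standard_normal((d, 3 * d))  # shared projection weights (toy, single head)

def attention(x):
    """Causal self-attention over the last two axes; accepts (T, d) or (B, T, d)."""
    q, k, v = np.split(x @ W_qkv, 3, axis=-1)
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    scores = scores + np.triu(np.full((T, T), -np.inf), k=1)  # causal mask
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))   # row-wise softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

prompts = rng.standard_normal((B, T, d))  # two unrelated prompts, stacked

batched = attention(prompts)                            # one batched pass
one_by_one = np.stack([attention(p) for p in prompts])  # each prompt alone

# Batching never mixes the batch axis, so the results agree
# (up to floating-point tolerance).
assert np.allclose(batched, one_by_one)
```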