It's just a re-invention of kernel smoothing. Cosma Shalizi has an excellent write-up on this [0].
Once you recognize this, it's a wonderful re-framing of what a transformer is doing under the hood: you're effectively learning a bunch of sophisticated kernels (through the feed-forward layers) and then applying kernel smoothing in different ways through the attention layers. It makes you realize that Transformers are philosophically much closer to things like Gaussian Processes (which are also just a bunch of kernel manipulation).
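To make the correspondence concrete, here's a minimal NumPy sketch (my own illustration, not Shalizi's code): a Nadaraya-Watson kernel smoother with a Gaussian kernel next to plain scaled dot-product attention (single head, no learned projections). If the queries and keys are unit-norm, ||q - k||^2 = 2 - 2 q·k, so with bandwidth h chosen so that h^2 = sqrt(d) the two produce identical weights after normalization.

```python
import numpy as np

def nadaraya_watson(queries, keys, values, bandwidth=1.0):
    """Nadaraya-Watson kernel smoothing: each output is a weighted
    average of `values`, weighted by a Gaussian kernel on the
    distance between the query and each key."""
    # squared distances between every query and every key
    d2 = ((queries[:, None, :] - keys[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth**2))   # kernel weights
    w /= w.sum(axis=1, keepdims=True)      # normalize rows
    return w @ values

def softmax_attention(queries, keys, values):
    """Scaled dot-product attention, single head, no projections."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)      # softmax over keys
    return w @ values

# On unit-norm inputs the two coincide (bandwidth h = d**0.25,
# i.e. h^2 = sqrt(d), matches attention's 1/sqrt(d) scaling):
rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=(5, d)); q /= np.linalg.norm(q, axis=1, keepdims=True)
k = rng.normal(size=(7, d)); k /= np.linalg.norm(k, axis=1, keepdims=True)
v = rng.normal(size=(7, 3))
print(np.allclose(nadaraya_watson(q, k, v, bandwidth=d**0.25),
                  softmax_attention(q, k, v)))
```

The learned query/key projections in a real transformer then amount to learning *which* kernel to smooth with, rather than fixing one up front.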
0. http://bactra.org/notebooks/nn-attention-and-transformers.ht...