I don’t love these “X is Bayesian” analogies because they tend to ignore the most critical part of Bayesian modeling: sampling with detailed balance.
This article goes into the implicit prior/posterior updating during LLM inference; you can even go a step further and directly implement hierarchical relationships between layers with H-Nets. However, even under an explicit Bayesian framework, there’s a stark difference in robustness between these H-Nets and the equivalent Bayesian model with the only variable being the parameter estimation process. [1]
Not a professional, but an avid researcher/reader.
These papers look promising, but there are a few initial strikes. First, the research itself was clearly done with agentic support; I'd guess from the blog post and the papers that the research was actually done by agents with human support. There are lots of persistent giveaways, like overcommitting to weird titles such as "Wind Tunnel", and the obvious turns of phrase in the Medium post unfortunately carry over into the papers themselves. This doesn't mean they're wrong, but I do think it means what they have is less information-dense and less obviously correct, given today's state of the art in agentic research.
Upshot of the papers: there's one central claim, that each layer of a well-trained transformer performs a Bayesian 'update' and selection of "truth" or preference of the model; deeper layers in the architecture mean more accuracy. Thinking models offer a chance to refresh the context, get back to the start of the layers, and do further refinement.
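To make that claim concrete, here's my own toy sketch (not code from the papers): a conjugate Beta-Bernoulli model where each "layer" applies one Bayesian update, so a deeper stack means a posterior that has absorbed more evidence.

```python
# Hypothetical illustration of "one Bayesian update per layer":
# each layer folds one observation into a Beta posterior over a coin bias.
def layerwise_updates(prior_a, prior_b, observations):
    """Apply one conjugate Beta-Bernoulli update per 'layer'."""
    a, b = prior_a, prior_b
    posterior_means = []
    for x in observations:          # one update per layer
        a += x                      # count of successes
        b += 1 - x                  # count of failures
        posterior_means.append(a / (a + b))
    return posterior_means

# Start from a uniform Beta(1, 1) prior; deeper "layers" sharpen the estimate.
means = layerwise_updates(1.0, 1.0, [1, 1, 0, 1, 1, 1])
```

Under this reading, "more layers = more accuracy" is just the usual story of a posterior concentrating as updates accumulate; the papers' claim is that trained transformer layers implement something like this implicitly.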
There's a follow-up claim: that treating what the models are doing as solely updating weights for this Bayesian process will yield more efficient training.
Data in the paper: I didn't read deeply enough to decide whether this whole "it's all Bayes all the way down" claim seems true to me. They show that if you ablate single layers, accuracy drops. But that is not news.
They do show significantly faster (per round) loss reduction using EM training vs SGD, but they acknowledge this converges to the same loss eventually (although their graphs do not show this convergence, btw), and crucially they report nothing on the compute required, nor any comparison with more modern methods.
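For readers unfamiliar with why EM gives fast per-round loss reduction: each round makes a closed-form jump rather than a small gradient step, and the data log-likelihood is guaranteed not to decrease. This is a generic toy (1D two-component Gaussian mixture with fixed unit variances), not the papers' setup:

```python
import math

INV_SQRT_2PI = 1.0 / math.sqrt(2.0 * math.pi)

def em_step(data, mu1, mu2, w):
    # E-step: responsibility of component 1 for each point
    resp = []
    for x in data:
        p1 = w * math.exp(-0.5 * (x - mu1) ** 2)
        p2 = (1.0 - w) * math.exp(-0.5 * (x - mu2) ** 2)
        resp.append(p1 / (p1 + p2))
    # M-step: closed-form re-estimates of means and mixing weight
    r = sum(resp)
    mu1 = sum(ri * x for ri, x in zip(resp, data)) / r
    mu2 = sum((1.0 - ri) * x for ri, x in zip(resp, data)) / (len(data) - r)
    return mu1, mu2, r / len(data)

def log_lik(data, mu1, mu2, w):
    return sum(math.log(w * INV_SQRT_2PI * math.exp(-0.5 * (x - mu1) ** 2)
                        + (1.0 - w) * INV_SQRT_2PI * math.exp(-0.5 * (x - mu2) ** 2))
               for x in data)

data = [-2.1, -1.9, -2.0, 1.8, 2.2, 2.0]
mu1, mu2, w = -1.0, 1.0, 0.5
lls = []
for _ in range(20):
    mu1, mu2, w = em_step(data, mu1, mu2, w)
    lls.append(log_lik(data, mu1, mu2, w))
# each EM round is guaranteed not to decrease the log-likelihood
```

The monotone-improvement guarantee is exactly what makes per-round comparisons against SGD look flattering, which is why the missing compute accounting matters: one EM round here costs a full pass with closed-form solves, not one gradient step.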
Upshot: I think I'd skip this, and I kind of regret the time I spent reading the papers. It might be true, but (a) so what, and (b) we don't get anything falsifiable or genuinely useful out of the theory. Maybe if we could splice together different models in some new and cool way beyond merging layers, then I'd say we had something interesting out of this.
Pretty interesting. The posterior matching is a big deal, but I'm not convinced by the handwaving required to demonstrate it in larger models. I'm interested in seeing how direct EM training scales, though.
sure, but this stuff is only obvious post hoc. so many people have tried to "justify" the attention mechanism according to their area of expertise, but none of them came up with it first; ML engineers with ML thinking did.
Last time I looked into SoTA Bayesian deep learning, Bayesian output layers seemed the most promising and practical. Is that still the case?
Found it interesting and engaging, but having a CS professor at Columbia put their name to AI “slop” is a bit unnerving. If they are writing papers for work, you would hope they would enjoy the process of thinking and writing (journaling) instead of using ChatGPT.
Just skimming, noticed lots of em dashes, interesting :).
I've read through most of the first paper mentioned.
Here, the authors have set up two synthetic experiments where transformers have to learn the probability of observing events sampled from a "ground truth" Bayesian model. If the probability assigned by the transformers to the event space matches the Bayesian posterior predictive distribution, then the authors infer that the model is performing Bayesian inference on these tasks. Furthermore, they use this to argue that transformers perform Bayesian inference in general (belief propagation throughout the layers).
The transformers are trained on thousands of different "ground truth" Bayesian models, each randomly initialized, which means there's no underlying signal to learn besides the belief-propagation mechanism itself. This makes me wonder whether any sufficiently powerful maximum-likelihood model would meet this criterion of "doing Bayesian inference" in this scenario.
The transformers in this paper do not perform inference intrinsically, by virtue of being transformers. They perform inference because the optimal solution to the problems in the experiments is precisely to do inference, and transformers are expressive enough to model belief propagation. I find it hard to extrapolate from this that the same is happening in LLMs, for example.
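A quick sketch of the objection above, under my own simplified assumptions (uniform prior over a Bernoulli parameter, which is not necessarily the papers' setup): when the ground truth is re-drawn per task, the loss-minimizing next-symbol predictor is exactly the Bayesian posterior predictive, so any flexible model fit by maximum likelihood should land there. Even plain frequency counting recovers Laplace's rule (k+1)/(n+2):

```python
import random

random.seed(0)
n = 5                                     # prefix length
counts = [[0, 0] for _ in range(n + 1)]   # [trials, successes] per head-count k
for _ in range(200_000):
    theta = random.random()               # fresh "ground truth" each task
    prefix = [random.random() < theta for _ in range(n)]
    k = sum(prefix)
    nxt = random.random() < theta         # the symbol to be predicted
    counts[k][0] += 1
    counts[k][1] += nxt

# Empirical next-symbol frequency given the prefix's head count...
empirical = [s / t for t, s in counts]
# ...matches the posterior predictive under a uniform (Beta(1,1)) prior.
laplace = [(k + 1) / (n + 2) for k in range(n + 1)]
```

If raw counting matches the posterior predictive here, matching it is evidence the transformer solved the task well, not that transformers are Bayesian in some deeper architectural sense.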