Idk, to me this is just redescribing what deep neural networks do without actually explaining why anything happens. I guess it "unifies" things, but I'm kinda over most unifying theories: everything is Bayesian, everything is a graph or a group or some other fancy geometric structure, everything is a category. Ultimately the best framework is whichever one is useful enough to explain what's happening in a way that lets a practitioner steer the model toward a desired outcome. In other words, where is the knob? The tool they share does look interesting, though, and I hope to play with it to see what happens at different levels of noise applied to the labels.
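For anyone wanting to run the same kind of experiment, here's a minimal sketch of what I mean by "noise applied to the labels": plain symmetric label flipping in NumPy. The function name, noise rates, and dataset shape are just placeholders I made up, not anything from the paper or their tool:

    import numpy as np

    def corrupt_labels(labels, noise_rate, num_classes, seed=0):
        # Randomly reassign a fraction `noise_rate` of labels to a different class.
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels).copy()
        n_flip = int(noise_rate * len(labels))
        idx = rng.choice(len(labels), size=n_flip, replace=False)
        # Shift each chosen label by a random nonzero offset so it always changes class.
        offsets = rng.integers(1, num_classes, size=n_flip)
        labels[idx] = (labels[idx] + offsets) % num_classes
        return labels

    # Example: corrupt 20% of CIFAR-10-style labels, then sweep noise_rate and retrain.
    clean = np.random.default_rng(1).integers(0, 10, size=50_000)
    noisy = corrupt_labels(clean, noise_rate=0.2, num_classes=10)
    print((noisy != clean).mean())  # ~0.2

The idea would just be to sweep noise_rate from 0 up toward 1 and see where (or whether) the behavior the paper describes breaks down.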
A real theory would predict phenomena thus far unseen; we already know about this four-part taxonomy.
We're still in the room-sized-computers-that-only-scientists-understand era of neural networks. Knobs and buttons for nerds are slowly coming.