The answer should be obvious: it's both.
Zurada was one of our AI textbooks, and it makes this visual: from a simple classifier to a large language model, we are mathematically creating a shape that the signal interacts with. More parameters mean the shape can curve in more ways, and more data means the curve gets higher definition.
They arrive at something empirically, treating the neural network as a black box, when it could be derived mathematically from the information we already know.
This reminds me of https://dnhkng.github.io/posts/rys/
David looks inside the LLM, finds the "thinking" layers, makes duplicates of them, and puts the copies back to back.
This increases the LLM's scores with basically no overhead.
Very interesting read.
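For anyone curious, here's a rough sketch of what that kind of layer duplication could look like with Hugging Face transformers. The model name and layer range are just assumptions for illustration, not what the post actually uses:

    import copy
    import torch
    from transformers import AutoModelForCausalLM

    # Assumed model and layer range, purely for illustration.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)

    layers = model.model.layers          # decoder layer stack
    start, end = 20, 24                  # hypothetical "thinking" layers

    # Deep-copy the chosen block and splice the copies in right after it,
    # so the duplicated layers run back to back with the originals.
    dup = [copy.deepcopy(layers[i]) for i in range(start, end)]
    model.model.layers = torch.nn.ModuleList(
        list(layers[:end]) + dup + list(layers[end:]))
    model.config.num_hidden_layers = len(model.model.layers)

I believe mergekit's passthrough merges do roughly the same thing via a config file, if you'd rather not hand-edit the module list.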