Are LLMs still trained by (variants of) stochastic GRADIENT descent? AFAIK what used to be called "backprop" is nowadays known as "automatic differentiation". It's widely used in PyTorch, JAX, etc.
Gradient descent isn't really the point here: second-order and higher-order methods still rely on lower-order derivatives, so they still need the same gradient machinery.
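A quick sketch of that point in JAX (the loss function here is just an illustrative toy): higher-order quantities are built by composing the same first-order differentiation machinery.

```python
import jax
import jax.numpy as jnp

# Toy scalar loss over a parameter vector (purely illustrative).
def loss(w):
    return jnp.sum(jnp.tanh(w) ** 2)

w = jnp.array([0.1, -0.3, 0.7])

grad_fn = jax.grad(loss)      # first order: reverse-mode AD
hess_fn = jax.hessian(loss)   # second order: composed AD passes

print(grad_fn(w))   # gradient, shape (3,)
print(hess_fn(w))   # Hessian, shape (3, 3)
```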
Backpropagation is reverse-mode automatic differentiation; they are the same thing.
And for those who don't know what backpropagation is: it is just an efficient method for computing the gradient of the loss with respect to all parameters at once. A minimal sketch follows below.
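For concreteness, here is a minimal JAX sketch (the model and parameter names are made up for illustration) of "one reverse pass gives the gradient for every parameter":

```python
import jax
import jax.numpy as jnp

# A tiny linear model with two parameter arrays (illustrative only).
params = {
    "W": jnp.ones((4, 3)),
    "b": jnp.zeros(3),
}

def loss(params, x, y):
    pred = x @ params["W"] + params["b"]
    return jnp.mean((pred - y) ** 2)

x = jnp.ones((8, 4))
y = jnp.zeros((8, 3))

# A single reverse-mode pass returns gradients for all parameters,
# at roughly the cost of a small constant number of forward evaluations.
grads = jax.grad(loss)(params, x, y)
print(grads["W"].shape, grads["b"].shape)   # (4, 3) (3,)
```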