Yes. No need to be apologetic or timid about it; pushing back against a flawed conceptual framing is not a nit.
I respect Karpathy’s contributions to the field, but often I find his writing and speaking to be more than imprecise — it is sloppy in the sense that it overreaches and butchers key distinctions. This may sound harsh, but at his level, one is held to a higher standard.
Whoever chose this topic title perhaps did him a disservice by suggesting he said the problem was backprop itself, since in his blog post he immediately clarifies what he meant. It's a nice, pithy way of stating the issue, though.
But Karpathy is completely right: students who understand and internalize how backprop works, having implemented it rather than treating it as a magic spell cast by TF/PyTorch, will also be able to understand problems like vanishing gradients intuitively.
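To make that concrete, here is a minimal NumPy sketch of the kind of thing you only notice by writing the backward pass yourself (the variable names are mine, not from the post): the local gradient of a sigmoid is at most 0.25, and it collapses toward zero once the unit saturates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass through a single sigmoid unit at several inputs.
x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
z = sigmoid(x)

# Backward pass: d(sigmoid)/dx = z * (1 - z).
# This local gradient never exceeds 0.25, and for saturated inputs
# (|x| large) it is nearly zero, so whatever gradient flows in from
# upstream gets multiplied by almost nothing on its way back.
local_grad = z * (1.0 - z)
print(local_grad)  # approx. [4.5e-05, 0.197, 0.25, 0.197, 4.5e-05]
```

Stack a few saturated sigmoid layers and these factors multiply, which is the vanishing-gradient problem; if you've only ever called a framework, nothing in the API surfaces this.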
Sure, instead of "the problem with backpropagation is that it's a leaky abstraction" he could have written "the problem with not learning how backpropagation works, and just learning how to call a framework, is that backpropagation is a leaky abstraction". But that would be a terrible sub-heading for an introductory article aimed at undergraduates, and it's also unnecessary, because he already said that in the introduction.
> often I find his writing and speaking to be more than imprecise
I think that's more because he's trying to write for an audience that isn't already hardcore-deep into ML, so he simplifies a lot, sometimes to the detriment of accuracy.
At this point I see him more as an "ML educator" than an "ML practitioner" or "ML researcher", and as far as I know he's moving in that direction on purpose. I have no qualms with that overall; he seems good at educating.
But I think keeping the purpose of his writing in mind may help explain why it sometimes feels imprecise.