Typically, I think, but you could also continue pre-training your previous model on new data.
I don’t think it’s publicly known for sure how different the models really are. You can improve a lot just by improving the post-training set.