I am no grammarian, but I feel like em-dashes are an easy way to tie together two different concepts without rewriting the entire sentence to flow more elegantly. (Not to say that em-dashes are inelegant; I like them a lot myself.)
And so AI models are prone to using them: an em-dash requires less computation than rewriting the sentence.
This is sort of my thinking too. It's finding the next token once the previous ones have been generated. Dashes are an efficient way to continue a thought once you've already written a nearly complete sentence, without creating a run-on sentence. They're efficient in the sense that they keep more grammatically correct continuations open even when you've committed to the previous tokens.