The nice thing about a vaguely English-like language is that your average LLM is going to do a better job of making sense of it, because it can leverage what it's learned from the entire training set, not just the code-specific portion of it.
The exact opposite is true.
Not for generating it, anyway: the more the language looks like prose, the more the LLM's output will be influenced by all the prose it has ingested, rather than by valid examples of the language itself.