In a sense they do use their own language: they program in tokenized source, not ASCII source. Maybe that's just a form of syntactic sugar, like replacing >= with ≥, but scaled up a hundredfold. Or... maybe it's more than that? As I understand it, the tokenization and the models coevolve.
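To make the "tokenized source" point concrete, here's a toy sketch (this is not a real BPE tokenizer, just an illustration of the idea): frequent multi-character fragments like operators and keywords get folded into single tokens, so the model's view of the source is coarser than the character stream.

```python
import re

# Toy illustration: treat common multi-character operators and keywords as
# single tokens, the way a trained subword vocabulary often does for frequent
# source-code fragments. A real BPE vocabulary is learned from data, not listed.
MULTI = ["->", ">=", "<=", "==", "!=", "def", "return"]
PATTERN = re.compile("|".join(map(re.escape, MULTI)) + r"|\S")

def tokenize(src: str) -> list[str]:
    """Greedy left-to-right match: multi-char units first, then single chars."""
    return PATTERN.findall(src)

print(tokenize("def f(x): return x >= 0"))
# ['def', 'f', '(', 'x', ')', ':', 'return', 'x', '>=', '0']
```

The model never sees `>` and `=` separately here; `>=` is one atomic symbol to it, which is what makes the ≥ analogy apt.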
If we do enough passes of synthetic or goal-based training of source code generation, where models are trained to successfully implement things rather than to imitate successful implementations, then we may see new programming paradigms emerge that were not present in any training data. The "new language" would probably not be a programming language (since we train models to generate source FOR a given language, rather than giving them the freedom to invent languages), but it could be new patterns within existing languages.