Given that LLMs produce code that is essentially an average of the code they were trained on, which is all human code of varying quality, I don't see how the current methods are going to produce better code than humans do when working with their own domain-specific knowledge.