> Assuming the LLM in use evolves over time, or can reasonably be expected to, the documents it generates from a constant prompt will diverge unpredictably.
So what?
You prompt it once. It writes code.
You commit and test that code, not the prompt, so later model drift never comes back into play.
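
A minimal sketch of that workflow (the function and its body are hypothetical stand-ins, not any particular model's output): the generated code lives in your repo, and the tests pin its behavior. Whether a future model would emit different code for the same prompt is irrelevant, because the prompt never runs again.

```python
# slugify.py stands in for whatever the model generated the one time you
# prompted it. The body is illustrative only.
import re


def slugify(text: str) -> str:
    """Lowercase and replace runs of non-alphanumerics with single hyphens."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")


# These tests are the artifact you maintain. They exercise the committed
# code, not the prompt, so model evolution can't silently change behavior.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_collapses_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"


if __name__ == "__main__":
    test_lowercases_and_hyphenates()
    test_collapses_punctuation()
    print("generated code passes its tests")
```

If a regeneration ever is needed, the new output has to pass the same suite before it replaces the old code; the tests, not the prompt, are the contract.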