"Prompt Repetition Improves Non-Reasoning LLMs " - https://arxiv.org/pdf/2512.14982
What instance of ChatGPT are you doing that with? (Reasoning?)
Observed with 5.2 on chatgpt.com; earlier versions did worse, as in they might take a few prompts to generate parseable syntax. Newer versions usually deliver one unparseable version, then get it right on the second try. I could probably prompt-engineer it to one-shot, but I think I would always need the specific warning about newlines.
I don't think it's about repeating the instructions, but rather providing feedback as to why it's not working.
I've noticed the same thing when building an agentic loop: if the model's output has a syntax error, automatically feeding the error back to the LLM and giving it a second chance dramatically increases the success rate.
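A minimal sketch of that retry loop (the model call is stubbed out with a fake function here; `generate_with_retry`, `fake_model`, and the exact feedback wording are illustrative, not from any particular library):

```python
import json

def generate_with_retry(generate, prompt, parse=json.loads, max_attempts=3):
    """Call the model; on a parse failure, feed the error back and retry."""
    messages = [{"role": "user", "content": prompt}]
    last_error = None
    for _ in range(max_attempts):
        reply = generate(messages)
        try:
            return parse(reply)
        except Exception as e:
            last_error = e
            # Keep the bad output in context and tell the model what went wrong.
            messages.append({"role": "assistant", "content": reply})
            messages.append({
                "role": "user",
                "content": f"That output failed to parse ({e}). "
                           "Please respond with only valid JSON, no commentary.",
            })
    raise ValueError(f"No parseable output after {max_attempts} attempts: {last_error}")

# Stub standing in for a real LLM API call: unparseable JSON first, valid on retry.
_replies = iter(['{"answer": 42,,}', '{"answer": 42}'])
def fake_model(messages):
    return next(_replies)

result = generate_with_retry(fake_model, "Give me the answer as JSON.")
```

The key point is that the retry message includes the parser's error, not just a repeat of the original instructions.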