I'm not sure. Look at what they're already doing with feedback in code generation: the LLM "hallucinates" and generates a wrong idea, tests its code only to find it doesn't compile, then revises the idea and tries again.
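For what it's worth, the loop being described is roughly this (a minimal sketch; `generate_code` is a hypothetical stand-in for the model call, and I'm assuming gcc as the compiler):

```python
import subprocess
import tempfile

def generate_code(prompt: str, feedback: str | None = None) -> str:
    # Stand-in for an LLM call; a real system would send `prompt` plus
    # any compiler `feedback` to a model and return its code suggestion.
    return "int main(void) { return 0; }"

def compiles(source: str) -> tuple[bool, str]:
    # Write the candidate C source to a temp file and syntax-check it.
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        ["gcc", "-fsyntax-only", path],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stderr

def generate_with_feedback(prompt: str, max_tries: int = 5) -> str | None:
    feedback = None
    for _ in range(max_tries):
        source = generate_code(prompt, feedback)
        ok, errors = compiles(source)
        if ok:
            return source
        # Feed the compiler errors back so the next attempt can correct them.
        feedback = errors
    return None
```

The point is that the compiler acts as a cheap, reliable oracle, so the model gets a feedback signal without any human in the loop.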
A few minutes' worth of "personal experience" doesn't really deserve the "personal experience" qualifier.