You still get the same thing though?
That grumpy guy is using an LLM and debugging with it. He solves the problem. The AI provider fine-tunes their model with that session. Now his input is baked into its responses.
How do you think these things work? It's either direct human input the model is remembering, or an RL environment built by a human to solve the kind of problem you're working on.
Nothing in it is "made up"; it's just a resolution problem, and that will only get better over time.
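Roughly like this, if you want to picture it (a minimal sketch, assuming the provider just does supervised fine-tuning on logged sessions; the model name, data shape and hyperparameters here are made up, not any provider's actual pipeline):

```python
# Illustrative only: how a logged debugging session could become fine-tuning data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical logged interaction: the user's prompt plus the fix they accepted.
logged_sessions = [
    {"prompt": "Why does this segfault?\n<code snippet>",
     "accepted_fix": "You're freeing the buffer twice; drop the second free()."},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def encode(session):
    # Train the model to reproduce the accepted fix given the prompt.
    text = session["prompt"] + "\n" + session["accepted_fix"] + tokenizer.eos_token
    return tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for session in logged_sessions:
    batch = encode(session)
    # Standard causal-LM loss: labels are the input ids themselves.
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Point being: the human's accepted fix is now literally part of the training signal.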
How does that work if there's no new data for them to train on, only AI slurry?