No, I am making the argument that models have poor capabilities outside of the tasks they are RLed for, and that their capabilities inside those tasks are only as good as the capabilities of the people evaluating their responses, i.e. not great. Even if you instruct the model "don't do X" or "do X this way," you cannot rely on the model following that instruction. This means there is nothing you can do if the model makes "errors."
Not necessarily relevant, but fun: I had the ChatGPT model correct itself mid-response while checking my math work. It started by saying I was wrong, then proceeded to solve the problem, and at the end realized I was correct.
> Even if you instruct the model "don't do X" or "do X this way"—you cannot rely on the model following that instruction.
Why not? I can definitely fire off two prompts to the same model and harness, one including "don't do X" and the other not, and I get what I expect: the one without the instruction doesn't try to avoid doing X, and the one with it does. Is that not your experience using LLMs?
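The two-prompt comparison described above can be sketched as a small harness. This is a minimal sketch under stated assumptions: `query_model` is a hypothetical stand-in for a real model call (stubbed here so the example is self-contained), and "X" is arbitrarily taken to be "using recursion."

```python
def query_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real model/harness call;
    # a real version would send `prompt` to the model and return its completion.
    if "don't use recursion" in prompt:
        return ("def fact(n):\n"
                "    r = 1\n"
                "    for i in range(2, n + 1):\n"
                "        r *= i\n"
                "    return r")
    return "def fact(n):\n    return 1 if n <= 1 else n * fact(n - 1)"

def violates_constraint(response: str) -> bool:
    # Crude textual check for the forbidden pattern ("X" = recursion here);
    # a real evaluation would need a more robust detector.
    return "fact(n - 1)" in response

base = "Write a factorial function in Python."
constrained = base + " Please don't use recursion."

print(violates_constraint(query_model(base)))         # → True (baseline uses recursion)
print(violates_constraint(query_model(constrained)))  # → False (instruction followed)
```

Running the same pair many times and counting violations would turn this anecdote into a measurable compliance rate, which is the crux of the disagreement: not whether the instruction ever works, but how reliably.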