OK, but when the model is responding to you, isn't the text it's generating also part of the context it's using to generate the next token as it goes? Wouldn't that just make the answers… dumb?
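The loop being asked about can be sketched in a few lines. This is a toy illustration only: `next_token` here is a hypothetical stand-in for a real model, but the feedback structure is the point — each generated token is appended to the context and conditions the next prediction.

```python
def next_token(context):
    # Hypothetical stand-in for a real model's next-token prediction:
    # here it just emits a token named after the current context length.
    return f"tok{len(context)}"

def generate(prompt_tokens, n_new):
    context = list(prompt_tokens)   # the prompt is the initial context
    for _ in range(n_new):
        tok = next_token(context)   # prediction conditions on everything so far...
        context.append(tok)         # ...including the model's own earlier output
    return context[len(prompt_tokens):]

print(generate(["Hello", ","], 3))  # → ['tok2', 'tok3', 'tok4']
```

So yes, the model's own output becomes part of its input as generation proceeds; that feedback is what the question is probing.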