Of course, LLMs have clearly separated training and inference stages, so I don’t think prompts get immediately integrated into the model. And it would be pretty weird if there were some shared context that all the prompts got put into, because it would grow to an absurdly massive size.
But I also expect that, eventually, every prompt becomes a candidate for inclusion in the training set of some future version of the model (at least when using a hosted, proprietary model that sends your prompts off to some company’s servers).