Lots of research shows that post-training dumbs down the models, but no one listens: people are too lazy to learn proper prompt programming and would rather have a model that already understands the concept of a conversation.
Some distributional collapse is good in terms of making these things reliable tools. Creativity and divergent thinking do take a hit, but humans are better at those anyway, so I view it as a net W.
How do you take a raw model and use it without chatting? Asking as a layman.
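Not chatting means completion: you hand the model a piece of text whose most likely continuation is the output you want, instead of sending it "messages". Here's a minimal sketch of that kind of prompt programming, assuming the Hugging Face transformers library and gpt2 as a stand-in for a raw base model (any completion-only model works the same way):

```python
# Prompt-programming a base (non-chat) model via raw completion.
# Assumes the Hugging Face transformers library; gpt2 is a stand-in
# for any raw base model with no chat post-training.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A base model has no notion of user/assistant turns. You get behavior
# out of it by writing a prompt whose natural continuation is the
# answer you want -- here, a few-shot pattern for sentiment labels.
prompt = (
    "Review: The food was cold and the service was slow.\n"
    "Sentiment: negative\n"
    "Review: Absolutely loved it, will come back!\n"
    "Sentiment: positive\n"
    "Review: The movie started strong but dragged on forever.\n"
    "Sentiment:"
)

out = generator(prompt, max_new_tokens=3, do_sample=False)
# The pipeline returns the prompt plus its continuation; slice off the
# prompt to keep only the model's completion.
print(out[0]["generated_text"][len(prompt):].strip())
```

The whole trick is in the prompt: the model is just predicting what text comes next, so you design the text such that "what comes next" is your answer.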
"Post-training" is too much of a conflation, because there are many post-training methods and each of them has its own quirky failure modes.
That being said? RLHF on user feedback data is model poison.
Users are NOT reliable model evaluators, and user feedback data should be handled with the same precautions you would apply to radioactive waste.
Professionals are not very reliable either, but users are so much worse.