If it's in your power, make sure user prompts and LLM responses are never read, never analyzed, and never used for training - not anonymized, not derived, not at all.
No single person other than Sam Altman can stop them from using anonymized interactions for training and metrics. At least in the consumer tiers.
It's a little too late for that; all the models already train on prompts and responses.