Hacker News

I_am_tiberius · today at 2:17 AM

If it's in your power, make sure user prompts and LLM responses are never read, never analyzed, and never used for training: not anonymized, not derived, not at all.


Replies

surajrmal · today at 2:49 AM

No single person other than Sam Altman can stop them from using anonymized interactions for training and metrics, at least in the consumer tiers.

satvikpendem · today at 3:14 AM

It's a little too late for that; all the models train on prompts and responses.