Hacker News

ting0 · today at 8:22 AM

What's the problem with generating synthetic data from test questions? Humans simulate experiences in their minds all the time.


Replies

BoorishBears · today at 8:37 AM

Models don't generalize as well as humans.

If a model was trained on <|begin_text|> <|end_text|> and you change the tokens passed in to <|start_text|> <|end_text|>, it loses several 'IQ points', if it can even answer coherently at all.
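A toy sketch of why that swap hurts (not a real tokenizer, just an illustration): a special token the model was trained on maps to a single learned ID, while an unseen variant isn't in the vocabulary and gets shattered into many generic pieces the model never saw in that role.

```python
# Toy vocabulary: the trained special tokens have single learned IDs.
VOCAB = {"<|begin_text|>": 0, "<|end_text|>": 1, "hello": 2, "world": 3}

def tokenize(text):
    """Whitespace tokenizer with a character-level fallback for
    out-of-vocabulary strings (stand-in for subword splitting)."""
    tokens = []
    for word in text.split():
        if word in VOCAB:
            tokens.append(VOCAB[word])                # one learned ID
        else:
            tokens.extend(ord(c) + 1000 for c in word)  # shattered into pieces
    return tokens

trained = tokenize("<|begin_text|> hello world <|end_text|>")   # 4 tokens
swapped = tokenize("<|start_text|> hello world <|end_text|>")   # 17 tokens
print(len(trained), len(swapped))
```

The input the model actually sees changes drastically even though the text looks nearly identical to a human, which is one reason the degradation is so sharp.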

Synthetic data is fine. Synthetic data generated from a description of very similar questions is typically fine too. But once the shape of what you're training on gets too close to the actual holdout questions, you're getting an uplift that won't transfer to genuinely unseen tasks.
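One crude way to check whether synthetic questions have drifted too close to a holdout set is n-gram overlap; the function and thresholds below are made up for illustration, not any standard contamination metric.

```python
def ngrams(text, n=3):
    """Set of word-level n-grams for a lowercased string."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=3):
    """Fraction of shared n-grams, normalized by the smaller set."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / min(len(ga), len(gb))

holdout = "what is the capital of france and why did it become the capital"
synthetic_ok = "name the largest city in germany and explain its history"
synthetic_leaky = "what is the capital of france and when did it become the capital"

print(overlap(holdout, synthetic_ok))     # near 0: different shape, fine
print(overlap(holdout, synthetic_leaky))  # high: too close to the holdout
```

High overlap flags synthetic items that are effectively paraphrases of held-out questions, which is exactly the case where benchmark uplift stops reflecting real generalization.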