Hacker News

NitpickLawyer · yesterday at 4:19 PM · 0 replies

> gpt-5.2 did ~2x better than gpt-5.2-codex.. why?

Optimising a model for a certain task via fine-tuning (a.k.a. post-training) can cause a loss of performance on other tasks. People want Codex to "generate code", "drive agents", and so on, so OpenAI fine-tuned it for that.