I never really used Codex (found it too slow), just 5.2, which is going to be an excellent model for my work. This looks like another step up.
This week, I'm all local though, playing with opencode and running qwen3 coder next on my little Spark machine. With the way these local models are progressing, I might move all my LLM work local.
I think Codex got much faster for smaller tasks in the last few months, especially if you turn thinking down to medium.