Why is image generation the same as code generation?
It isn't.
Code generation with LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:
1. Users still cannot trust that the code works (even if it has tests), so it needs thorough human supervision and ongoing maintenance.
2. Because of (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.
Image generation, by contrast, comes with close to no operational impact: it needs far less human supervision and can often be done safely with none at all.
It's not. We were able to get rid of six-fingered hands by getting very specific and fine-tuning models with lots of hand and finger training data.
But that approach doesn't work for code, or for reasoning in general, because you would need to fine-tune on an exponentially growing set of cases, effectively everything in the universe. At that point, the illusion that the AI "understands" what it is doing is lost.