I encounter this all the time with GenAI projects. The idea of stability and "frozen" just doesn't exist with hosted models IMO. You can't bet that the model you're using will have the exact same behavior a year from now, hell, maybe not even 3 months from now. The model providers seem to be constantly tweaking things behind the scenes, or sunsetting old models very rapidly. It's a constant struggle of re-evaluating the results and tweaking prompts just to stay on the treadmill.
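For concreteness, the "re-evaluating" step usually ends up as some kind of golden-output regression check you re-run whenever behavior seems off. A rough sketch, assuming the OpenAI Python SDK; golden.json is a hypothetical file of prompt/answer pairs captured back when things worked:

    # drift_check.py - re-run a fixed prompt set against the hosted model
    # and diff the answers against previously recorded "golden" outputs.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("golden.json") as f:  # hypothetical: [{"prompt": ..., "expected": ...}, ...]
        cases = json.load(f)

    drifted = 0
    for case in cases:
        resp = client.chat.completions.create(
            model="gpt-4o-2024-08-06",  # pin a dated snapshot, not a floating alias
            temperature=0,              # reduces (but doesn't eliminate) nondeterminism
            messages=[{"role": "user", "content": case["prompt"]}],
        )
        answer = resp.choices[0].message.content
        if answer != case["expected"]:  # exact match is crude; real evals score semantically
            drifted += 1
            print(f"DRIFT: {case['prompt'][:60]}...")

    print(f"{drifted}/{len(cases)} cases changed since the golden run")

And even this only buys you so much: temperature=0 plus a pinned snapshot still isn't a guarantee of identical outputs, and the snapshot itself can be sunset out from under you. That's the treadmill.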
Good for consultants, maybe, but horrible for businesses that want to mark things as "done" and hand them off to limited-maintenance care-and-feeding teams. You're going to be dedicating senior folks to the project indefinitely.
You're gonna have to own the model weights, and there will be an entire series of providers dedicated to maintaining old models.
This isn't a new problem. It's like if you had built a business 10 years ago around an interface to a Google product, and then Google deleted the product. The answer is you don't sell permanent access to something you don't own. Period.
This is a big motivation for running your own models locally. OpenAI's move to deprecate older models was an eye-opener for some, but it's also typical of the SaaS "we don't have any versions" style of deployment. [0] It will need to change for AI apps to go mainstream in many enterprises.
[0] https://simonwillison.net/2025/Aug/8/surprise-deprecation-of...
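To make "running your own models locally" concrete: with a local runner like Ollama, the weights sit on your disk and only change when you change them. A minimal sketch, assuming Ollama is running on its default port with a model already pulled (e.g. `ollama pull llama3`):

    # local_infer.py - query a locally hosted model so the weights never
    # change out from under you.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={
            "model": "llama3",   # the weights on disk are yours to keep
            "prompt": "Summarize: hosted models can change behavior without notice.",
            "stream": False,     # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

Swap the endpoint for any local inference server; the point is that "deprecation" becomes a decision you make, not one made for you.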