No, LLMs really are not more sticky than traditional APIs. Normal APIs are unforgiving in their inputs and rigid in their outputs. No matter how hard you try, Hyrum's Law will get you over and over again. Every migration is an exercise in pain. LLMs are the ultimate adaptable, malleable tool. It doesn't matter that you carefully tuned your prompt against a specific six-month-old model: the new model of today is sufficiently smarter that it will do a better job despite never having been tuned on those specific prompts.
This isn't even theory; we can observe the swings in practice on OpenRouter.
If the value were in prompt engineering, people would stick to specific old versions of models, because a new version of a given model might as well be a totally different model: it will behave differently and will need to be qualified again. But of course only a few people stick with obsolete models. How many applications do you think still use a model released a year ago?
A full migration is not always required these days.
It is possible to write adapters to API interfaces. Many proprietary APIs become de facto standards when competitors start shipping those compatibility layers out of the box to convince you their product is a drop-in replacement. The S3 API is a good example: every major (and most minor) provider, with the glaring exception of Azure, supports the S3 API out of the box now. The Postgres wire protocol is a similar story; many databases speak it these days.
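To make that concrete, here is a minimal sketch using boto3, the standard AWS SDK for Python. The endpoint URL, bucket, and credentials are hypothetical placeholders; the point is that only the endpoint changes when you move to any S3-compatible provider.

```python
import boto3

# Point the stock AWS SDK at a hypothetical S3-compatible provider.
# Only endpoint_url and credentials differ from plain AWS usage.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # placeholder, non-AWS endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Everything below is identical to the code you'd write against AWS itself.
s3.upload_file("report.csv", "my-bucket", "reports/report.csv")
```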
In the LLM inference world, the OpenAI API spec is becoming that kind of de facto standard.
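The same pattern holds with the openai Python SDK: because so many providers expose OpenAI-compatible endpoints, switching is often just a new base_url and key. The URL and model name below are placeholders, not a recommendation of any particular provider.

```python
from openai import OpenAI

# Swap in a hypothetical OpenAI-compatible provider by overriding base_url.
client = OpenAI(
    base_url="https://api.example-inference.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="some-open-model",  # whatever model the provider serves
    messages=[{"role": "user", "content": "Summarize Hyrum's Law in one line."}],
)
print(response.choices[0].message.content)
```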
There are always caveats, of course, and switches rarely go without bumps. It depends on what you are using: stick to a few popular, widely and fully supported features and you'll mostly be fine, but rely on some niche feature of the API that a given provider hasn't implemented properly and you will hit bugs.
In most cases, bugs in the traditional API world are relatively easy to solve, since they can be reproduced and surface as logged exceptions.
In the LLM world there are few "right" answers for inference outputs, so it is a lot harder to catch and reproduce bugs, let alone fix them without breaking something else. You end up re-tuning all your workflows for the new model.
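A hypothetical pair of checks illustrates the asymmetry: the traditional API test asserts exact, replayable values, while the LLM check can only be fuzzy, so a model swap can pass it while still behaving differently. The functions and fields here are invented for illustration (the response object is assumed to be requests-style).

```python
def check_rest_api(resp):
    # Deterministic: same input, same output, exact expected values exist.
    assert resp.status_code == 200
    assert resp.json()["total"] == 42

def check_llm_output(text):
    # No single right answer: only heuristic checks are possible, and they
    # can't capture everything that matters about the output.
    assert "refund" in text.lower()
    assert len(text) < 2000
```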