Re the vendor lock-in point: this is really a harness issue. Sure, CC is restricted to Anthropic models, but it's not the only harness out there. So if one vendor has an outage or degrades model quality because of a compute shortage, you can switch to another vendor; LLMs are about the easiest component to swap. Of course, if hardware costs go up, so will prices across all AI vendors. The only way out for the employer would be to buy the hardware directly (or do a fixed-price deal with a cloud provider).
Re the understanding-code point: you can still use LLMs to understand the code. If you write the spec without knowing anything about the codebase, of course the architecture might suck: maybe there is already a subsystem you could modify and extend instead of bolting on a completely new one for the feature you are adding, etc.
I use LLMs in my daily workflows and they do understand code, and much more quickly than I can by reading it myself.
I’ve built a configuration transpiler between Claude Code and Codex and found I can switch between the two pretty quickly, or run both at once. At the moment Codex performs better; before that, CC did. There is no vendor lock-in; that's an old canard in technology that LLMs themselves make largely irrelevant. Once you’ve got an implementation that uses X, converting it to Y is almost trivial with an LLM, because the existing implementation serves as the canonical spec.
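To give a feel for it, here is a minimal sketch of what such a transpiler can look like, assuming a JSON settings file on the Claude Code side and a flat TOML-style config on the Codex side. The file paths and the key names in the mapping table are illustrative placeholders, not the real schema of either tool:

  import json
  from pathlib import Path

  # Illustrative mapping from Claude Code-style setting keys to Codex-style
  # keys. The real schemas of both tools differ; treat these names as
  # placeholders for whatever your own transpiler actually maps.
  KEY_MAP = {
      "model": "model",
      "permissions.allow": "approved_commands",
      "env.HTTP_PROXY": "http_proxy",
  }

  def get_path(obj, dotted):
      # Walk a dotted key like "permissions.allow" through nested dicts.
      for part in dotted.split("."):
          if not isinstance(obj, dict) or part not in obj:
              return None
          obj = obj[part]
      return obj

  def transpile(cc_settings_path, codex_config_path):
      # Read the JSON settings and emit flat key = value lines.
      settings = json.loads(Path(cc_settings_path).read_text())
      lines = []
      for src_key, dst_key in KEY_MAP.items():
          value = get_path(settings, src_key)
          if value is None:
              continue
          if isinstance(value, str):
              lines.append(f'{dst_key} = "{value}"')
          else:
              # Lists of strings, numbers and booleans serialize to
              # TOML-compatible literals via JSON.
              lines.append(f"{dst_key} = {json.dumps(value)}")
      Path(codex_config_path).write_text("\n".join(lines) + "\n")

  if __name__ == "__main__":
      # Hypothetical input/output locations, adjust to your setup.
      transpile(".claude/settings.json", "codex-config.toml")

The point is that the mapping is small and mechanical, which is exactly why switching harnesses stops being scary.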
They are also surprisingly good at finding bugs that humans often miss.
CC isn’t even limited to Anthropic models: there’s a post on the front page right now about using it with DeepSeek V4, since DeepSeek provides an Anthropic-compatible API and CC reads the API URL from environment variables, so you can override it.
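For anyone wondering what that override looks like, here is a rough sketch that just launches the CLI with a modified environment. ANTHROPIC_BASE_URL is the variable I understand CC honours, but the exact variable names and the DeepSeek endpoint URL are assumptions; check the respective docs before relying on them:

  import os
  import subprocess

  # Point the harness at a different Anthropic-compatible endpoint purely
  # via environment variables; nothing in the harness itself changes.
  env = os.environ.copy()
  env["ANTHROPIC_BASE_URL"] = "https://api.deepseek.com/anthropic"  # assumed endpoint
  env["ANTHROPIC_AUTH_TOKEN"] = os.environ.get("DEEPSEEK_API_KEY", "")  # your provider key

  # Launch the "claude" CLI with the overridden environment.
  subprocess.run(["claude"], env=env, check=True)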