Seeing as much of the discussion here is about LLMs, not just the shortcomings of natural language as a programming language, another LLM-specific aspect is how the LLM interprets the natural language instructions it is given...
One might naively think that the "AI" (LLM) is going to apply its intelligence to give you the "best" code in response to your request, and in a way it is, but this is "LLM best", not "human best": the LLM is trying to guess what's expected (i.e. minimize prediction error), not to give you the best quality code/design per your request. This is similar to having an LLM play chess - it is not trying to play what it thinks is the strongest move, but rather to predict a continuation of the game given the context, and that continuation will be a poor move if it thinks the context indicates a poor player.
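To make the chess analogy concrete, here's a toy sketch (not a real LLM, and all the probabilities are made up) of a conditional next-token predictor. The point is just that greedy decoding picks the most *likely* continuation given the context, not the *best* move - so a context suggesting a weak player yields a weak predicted move:

```python
# Hypothetical conditional distributions P(next_move | context) for
# the same chess position, learned from games by different players.
continuations = {
    # Context suggests a strong player: likely continuation is sound.
    "strong_player_game": {"Nf3 (sound)": 0.7, "h4?? (blunder)": 0.3},
    # Context suggests a weak player: likely continuation is a blunder.
    "weak_player_game":   {"Nf3 (sound)": 0.2, "h4?? (blunder)": 0.8},
}

def predict_continuation(context: str) -> str:
    # Greedy decoding: return the highest-probability next token.
    # This minimizes expected prediction error against the training
    # distribution; it does not maximize move quality.
    dist = continuations[context]
    return max(dist, key=dist.get)

print(predict_continuation("strong_player_game"))  # Nf3 (sound)
print(predict_continuation("weak_player_game"))    # h4?? (blunder)
```

The same mechanism applies to code: if the prompt "reads like" a context where sloppy code would follow, sloppy code is the most likely continuation.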
With an RL-trained reasoning model, the LLM's behavior is slightly longer horizon - not just minimizing next-token prediction errors, but also steering the output in a direction intended to match the type of reasoning seen during RL training. Again, this isn't the same as a human applying their experience to achieve (predict!) a goal; it's arguably more like cargo-cult reasoning - following observed patterns of reasoning from the training set, without the depth of understanding and intelligence to know whether those patterns really apply in the current context, and without the ability to learn from its mistakes when they don't.
So, while natural language itself is of course too vague to program in - part of the reason we use programming languages instead - it's perfectly adequate for communicating requirements etc. to an expert human developer/analyst. But when communicating with an LLM rather than a person, one should expect the LLM to behave as an LLM, not as a human. It's a paperclip maximizer, not a human-level intelligence.