I feel like the difference is minimal, if not entirely dismissible. Code in this sense is just a representation of the same information someone would write in an .md file. The resolution changes, and that's where both detail and context are lost.
I'm not against TDD or verification-first development, but I don't think writing that as code is the end goal. I'll concede that there are millions of lines of tests that already exist, so we should be using those as a foundation while everything else catches up.
Say you describe your kitchen as “I want a kitchen” - where are the knives? Where’s the stove? Answer: you abdicated control over those details, so they’re wherever the stochastic parrot decided to put them, which may or may not be where they ended up last time you pulled your LLM generate-me-a-kitchen lever. And it may not be where you want them.
Don’t like the layout? Let’s reroll! Back to the generative kitchen agent for a new one! ($$$)
The big labs will gladly let you reroll until you’re happy. But software - and kitchens - should not be generated in a casino.
A finished software product - like a working kitchen - is a fractal collection of tiny details. Keeping your finished software from falling apart under its own weight means upholding as many of those details as possible.
As with a good kitchen, a few details are all that stand between software that works and software that’s hell. In software, the probability that an agent will get 100% of the details right is very, very small.
Details matter.
Tests (and type-checkers, linters, formal specs, etc.) ground the model in reality: they show it that it got something wrong, without needing a human in the loop. It's empiricism, "nullius in verba": the scientific approach, which led to more remarkable advances in a few hundred years than over a thousand years of ungrounded philosophy could achieve.
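To make that concrete, here's a minimal sketch of what "grounding without a human in the loop" looks like in practice. The function and the expected values are purely hypothetical examples, not from any particular codebase; the point is only that a plain test encodes the details and fails on its own when a generated implementation gets them wrong:

    # Hypothetical example: a test as automated ground truth.
    # parse_price and its expected behaviour are illustrative only.

    def parse_price(text: str) -> int:
        """Candidate (possibly model-generated) implementation: "$3.50" -> 350 cents."""
        dollars, cents = text.lstrip("$").split(".")
        return int(dollars) * 100 + int(cents)

    def test_parse_price():
        # If a regenerated implementation drops any of these details,
        # the test fails by itself - no human has to notice.
        assert parse_price("$3.50") == 350
        assert parse_price("$0.05") == 5

    if __name__ == "__main__":
        test_parse_price()
        print("ok")

The test is cheap to run on every reroll, which is exactly what makes it a grounding signal rather than another thing a human has to re-verify.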