I don't know what you are arguing, or why. Please follow the thread in its full context. Specifically, the article author's argument is that moving to a higher level of abstraction also cost developers their understanding of the internals, and that ultimately this turned out not to matter very much.
The OP pushed back on this, saying that compilers are deterministic and LLMs are not, and that this lack of determinism makes LLM output unverifiable. I said the latter is not true, because you can verify the output with tests. You claimed tests are not verification because LLMs don't preserve semantics.
I'm not sure why semantic preservation matters here. That LLMs offer no guarantees about preserving semantics is not important, because you can guarantee the behavior of the generated code with tests. In most domains this is sufficient: you tell the LLM to write code that does X, Y and Z, and then verify X, Y and Z with a test. That's it.
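As a concrete sketch (the function and its spec are hypothetical, not from the thread): ask an LLM for a slugify function that (X) lowercases, (Y) collapses runs of non-alphanumerics into hyphens, and (Z) trims leading/trailing hyphens. You don't inspect how it implemented it; you pin the requested behavior down directly:

    import re

    # Hypothetical LLM output; its internals are opaque to us on purpose.
    def slugify(title: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # The verification step: these assertions check X, Y and Z,
    # regardless of how the model chose to implement them.
    def test_slugify():
        assert slugify("Hello World") == "hello-world"  # X: lowercased
        assert slugify("a  --  b") == "a-b"             # Y: separators collapsed
        assert slugify("--edge--") == "edge"            # Z: hyphens trimmed

If the model regenerates the function differently tomorrow, the same tests still tell you whether X, Y and Z hold.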
No, writing tests to verify that the "compiled" code semantically matches the code in the source language is not a good use of tests. The guarantees I'm talking about are different.
You write tests for your own logic, not to do the compiler's job.
I have no idea why you are so stuck on determinism. That has nothing to do with what I'm saying. Sure, compilers can be nondeterministic in things such as register allocation, but that is totally transparent to the programmer: the compiled code will do exactly what the source code describes. An LLM's nondeterminism is not limited to such details. It might mean the model decides to encode different logic, not merely a different implementation that is logically equivalent.
We don't usually write steps to verify that the compiler didn't ignore our code and do its own thing. With LLMs, you have to.
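A contrived sketch of what I mean (both completions are hypothetical): prompt for "a function that averages a list of numbers" twice, and the nondeterminism can show up as different logic, not just different codegen:

    # Two plausible completions of the same prompt. They differ in
    # semantics, not implementation: the second quietly redefines
    # what the function does on an empty list.
    def average_a(xs):
        return sum(xs) / len(xs)                 # raises ZeroDivisionError on []

    def average_b(xs):
        return sum(xs) / len(xs) if xs else 0.0  # returns 0.0 on []

    # A happy-path test cannot tell them apart:
    assert average_a([2, 4]) == average_b([2, 4]) == 3.0

To catch the divergence you have to write a test whose only job is to check that the model didn't quietly pick its own semantics for the empty case, which is exactly the kind of work a compiler never asks of you.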