you can easily verify a lot automatically, for example:
- correct syntax
- passes lints
- type checking passes
- fast test suite passes
- full test suite passes
and every time a check fails you feed the result back into the LLM, automatically, in a loop, without your involvement (see the sketch below).
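A minimal sketch of what such a loop could look like, assuming a Python project checked with ruff, mypy and pytest (substitute your own tools), and with the actual LLM call left as a hypothetical placeholder (`ask_llm_to_fix`):

```python
import subprocess

# Checks ordered cheapest-first; each entry is (name, command).
# The tools listed here are just examples -- swap in whatever your project uses.
CHECKS = [
    ("syntax", ["python", "-m", "compileall", "-q", "src"]),
    ("lint", ["ruff", "check", "src"]),
    ("types", ["mypy", "src"]),
    ("fast tests", ["pytest", "-m", "fast", "-q"]),
    ("full tests", ["pytest", "-q"]),
]

def ask_llm_to_fix(check_name: str, output: str) -> None:
    """Hypothetical placeholder: send the failing check's output back to the
    LLM and apply its proposed patch. The real call depends on your tooling."""
    print(f"feeding '{check_name}' failure back to the LLM:\n{output}")

def verify_loop(max_rounds: int = 5) -> bool:
    """Run all checks; on any failure, feed it back and start over."""
    for _ in range(max_rounds):
        for name, cmd in CHECKS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                ask_llm_to_fix(name, result.stdout + result.stderr)
                break  # restart from the cheapest check after the "fix"
        else:
            return True  # every check passed
    return False  # gave up after max_rounds

if __name__ == "__main__":
    print("all checks passed" if verify_loop() else "still failing")
```

The point is only the shape: cheap checks first, expensive ones last, and any failure goes straight back into the model without a human in the loop.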
The results are often -- sadly -- too good not to slowly start using AI.
I say sadly because IMHO the IT industry has gone somewhere very wrong by growing too fast, moving too fast and getting so much money that the companies spearheading it could just throw more people at problems instead of fixing underlying issues. There is also a huge divergence between the science of development, programming, application composition etc. (not to be confused with the science of, say, data structures and fundamental algorithms) and what the industry actually uses and how it advances.
Now I think normally the industry would self-correct at some point, but I fear that with LLMs we might get even further away from any fundamental improvements, as we find even more ways to keep muddling along and continue the mess we have.
Worse, LLM coding performance is highly dependent on how well very similar languages are represented in its training data, so new languages with any breakthrough or huge improvement will work less well with LLMs. If that trend continues, it would lock us in with very mid solutions long term.