Great article, nice to see some actual critical thought on the shortcomings of LLMs. They are wrong about programming being a "chess-like domain" though. Even at a basic level, the hidden state is future requirements, and the adversary is your future self or any other entity that has to modify the code later.
AI is good at producing code in scenarios where the stakes are low, where there's no expectation of future requirements, or where the problem is so well defined that there is a clear best path of implementation.
(Author here)
I address that in part in the article itself. Programming has parts that are chess-like (i.e. bounded), which is what people assume to be the actual work. Understanding future requirements and stakeholder incentives is also part of the work, and that's the part LLMs don't do well.
> many domains are chess-like in their technical core but become poker-like in their operational context.
This applies to programming too.