IMO LLMs are pushing us in the other direction.
To get the maximum ROI from LLM-assisted programming, you need proper unit tests, integration tests, correctly configured linters, accessible documentation, and a well-managed git history (Claude actually checks the git history nowadays to see when a feature was added if it has a bug).
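For what it's worth, the kind of history query this enables is plain git: `git log -S` finds the commit that introduced a string, which is exactly the "when was this feature added" question. A minimal sketch in a throwaway repo (all names here are illustrative):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
# Identity set inline so the example works without global git config
git -c user.email=tmp@example.com -c user.name=tmp commit -q --allow-empty -m "init"
echo "def feature(): pass" > app.py
git add app.py
git -c user.email=tmp@example.com -c user.name=tmp commit -q -m "add feature"
# -S ("pickaxe") lists commits that changed the number of occurrences of the string
git log -S "feature" --oneline
```

With a tidy history of small, well-described commits, `git log -S`, `git blame`, and `git bisect` all become far more useful, to humans and LLM agents alike.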
Worst case, we'll still have proper tests and documentation if the AI bubble suddenly bursts. Best case, we can skip the boring bits because the LLM is "smart" enough to handle the low-hanging fruit reliably, thanks to the robust test suite.