Like almost all of these articles, there's really nothing AI- or LLM-specific here at all. Modularization, microservices, monorepos, etc. have all been used in the past to help scale up software development for huge teams and complex systems.
The only new thing is that small teams using these new tools will run into problems that previously only affected much larger teams. The cadence is faster, sometimes a lot faster, but the architectural problems and solutions are the same.
It seems to me that existing good practices continue to work well. I haven't seen any radically new approaches to software design and development that only work with LLMs and wouldn't work without them. Are there any?
I've seen a few suggestions of using LLMs directly as the app logic, rather than using LLMs to write the code, but that doesn't seem scalable, at least not at current LLM prices, so I'd say it's unproven at best. And it's not really a new idea either; it's always been a classic startup trick to do some stuff manually until you have both the time and the necessity to automate it.
The "current LLM prices" part is doing a lot of work in that argument though. Prices dropped roughly 10x in the past year, and model routing helps too -- not every inference call in an agent loop actually needs a frontier model. Tool output parsing, formatting, simple next-step decisions can use something that costs 1/100th of Opus without quality loss.
The real shift isn't just that code gets generated faster; it's that people are starting to use LLMs as runtime components. And the cost curve for that use case is moving far faster than most people realize.