LLMs have clearly accelerated development for the most skilled developers, particularly when the human acts as the router/architect.
However, I've found that Claude Code and co. only really work well for bootstrapping projects.
If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.
It will probably change once the approach to large-scale design gets more formalized and structured.
We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.
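To make that concrete, here's a rough sketch of the kind of stateless sub-module I mean (hypothetical names, Python just for illustration): a pure function behind a small typed contract, so a single LLM invocation can implement or rewrite it with nothing but this one file in context.

```python
from dataclasses import dataclass

# Hypothetical example of a stateless sub-module: no globals, no I/O,
# no knowledge of the wider codebase. The whole contract fits in one
# file, which is all the context an LLM (or a human) needs to work on it.

@dataclass(frozen=True)
class PriceQuote:
    subtotal_cents: int
    discount_cents: int
    total_cents: int

def quote_price(unit_cents: int, quantity: int, discount_pct: float) -> PriceQuote:
    """Pure function: identical inputs always produce identical output."""
    if quantity < 0 or not 0.0 <= discount_pct <= 1.0:
        raise ValueError("invalid quantity or discount")
    subtotal = unit_cents * quantity
    discount = round(subtotal * discount_pct)
    return PriceQuote(subtotal, discount, subtotal - discount)
```

Because it's deterministic and self-contained, a module like this can be implemented, tested, and replaced in isolation.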
Yes, AI will one-shot crappy static sites. And you can vibe-code up to some level of complexity before it falls apart or slows dramatically.
> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation
Wait till you find out about programming languages and libraries!
> It will probably change once the approach to large-scale design gets more formalized and structured
This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.
Valknut is pretty good at forcing agents to build more maintainable codebases. It helps them DRY out code, separate concerns cohesively, and organize complexity. https://github.com/sibyllinesoft/valknut
> accrue massive technical debt
The primary difference between a programmer and an engineer.
>If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.
Worse, as it's planning the next change, it's reading all this bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".
Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192
> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.
Containment of state happens to benefit human developers too, and keeps complexity from exploding.