I can totally wrap my head around that, and it's an interesting thought experiment, though:
- building functionality as components that are swappable on a whim requires a level of careful thought, abstraction and architecture that is essentially the exact opposite of AI slop
- in this day and age we still don't make software for its own sake, and whoever is financing it doesn't generally require that level of functional flexibility (the real-world requirements driving the code don't change nearly fast enough to justify it)
- this comes loaded with the implication that "stuff needs to work": if you are developing software that manages inventory, orders, resources, ... you just can't take the chance of corrupting your customers' data or disrupting their business processes. Shipping faster than you can test, with no accountability and no oversight, is a solution to a problem I've personally never encountered in the wild
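For concreteness, here is a minimal sketch of what "swappable components" tend to mean in practice: callers depend on a narrow interface, and concrete implementations can be exchanged behind it. The names (`InventoryStore`, `InMemoryStore`, `receive_shipment`) are hypothetical, invented purely for illustration.

```python
from typing import Protocol


class InventoryStore(Protocol):
    """Narrow interface a storage backend must satisfy."""

    def get_stock(self, sku: str) -> int: ...
    def set_stock(self, sku: str, qty: int) -> None: ...


class InMemoryStore:
    """Trivial in-memory implementation, e.g. for tests.

    A database-backed implementation with the same two methods
    could be swapped in without touching any calling code.
    """

    def __init__(self) -> None:
        self._stock: dict[str, int] = {}

    def get_stock(self, sku: str) -> int:
        return self._stock.get(sku, 0)

    def set_stock(self, sku: str, qty: int) -> None:
        if qty < 0:
            raise ValueError("stock cannot be negative")
        self._stock[sku] = qty


def receive_shipment(store: InventoryStore, sku: str, qty: int) -> None:
    # Caller depends only on the interface, not the concrete store.
    store.set_stock(sku, store.get_stock(sku) + qty)
```

Designing and maintaining interface boundaries like this is exactly the kind of deliberate architectural work the bullet above is talking about.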
>- building functionality as components that are swappable on a whim requires a level of careful thought, abstraction and architecture that is essentially the exact opposite of AI slop
that is only true for humans, really. Why do we need that careful thought, abstraction and architecture? Because otherwise the code becomes an unmanageable pile of spaghetti handling a myriad of edge cases, abstraction leaks and unexpected side effects. The human brain can't manage it. AI can, or at least soon will be able to. It will just be a large pile of AI slop.
It may also happen that AI starts to generate good component-based architecture if forced to minimize, or improve in some other measurable way, its slop.