> We, agentic coders, can easily enough fork their project
And this is why you are likely to eventually drive the artisanal coders, who tend to do most of the true innovation, out of the room.
Because by and large, agentic coders don't contribute; they make their own fork, which nobody else is interested in because it is personalized to them and the code quality is questionable at best.
Eventually, I'm sure LLM code quality will catch up, but the ease with which an existing codebase can be forked and slightly tuned, instead of contributed back to the original, is a double-edged sword.
"make their own fork which nobody else is interested in because it is personalized to them"
Isn't that literally how open source works, and why there are so many Linux distros?
Code quality is a subjective term as well. I feel like everyone dunking on AI coding is having a defensive reaction; over time this will become an entirely acceptable concept.
Maybe! Or maybe there is really a competitive advantage to "artisanal" coding.
Personally, I would not currently expect a fork of RedoxOS that is AI-implemented to become more popular than RedoxOS itself.
I mean, I do open PRs for most of my changes upstream if they allow AI, once I've been using the feature for a few weeks, have fixed the bugs, and have gone over the code a few times to make sure it's good quality. Besides, I'm going to be using the damn thing; I don't want it to be constantly broken, and I don't want the code to get hacky and thus incompatible with upstream, or to cause the LLMs to drift. So I usually spend a good amount of time making sure the code is high quality: it integrates with the existing architecture and the model of the world in the code, follows best practices, covers edge cases, has tests, and is easy to read so that I can review it easily.
But if a project bans AI, then yeah, they'll be run out of town, because I won't bother trying to contribute.