1. You can already do that; it just costs more than $10.
2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is going to spend a sizeable fraction of the time reviewing and testing the feature that they would have spent coding it. You’re now back to 1.
Conceivably it might make the feature a little cheaper, but nowhere close to the kind of money you’re talking about.
Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.
Yeah, that’s the “ideologically not aligned” part I referenced.
If AI can make features without humans, why would I, as a profit-maximizing organization, donate that resource instead of keeping it in-house? If we’re not going to have human eyes on it, then we’re not getting more secure; I don’t really think there would be any positive PR for it; and keeping it in-house would deny competitors resources you now have that they don’t.
> Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.
The comment you responded to is (presumably) talking about the transition phase where LLMs can help implement a feature but not fully deliver it, and still need human oversight.
If there are reasonably good devs in low cost-of-living areas who can coax a new feature or bug fix for an open source project out of an LLM for $50, I think it’s worth trialling as a business model.