I forgot to mention why I brought up the idea of who is making the contribution rather than how (i.e., through an LLM).
Right now, the biggest issue open-source maintainers are facing is an ever-increasing supply of PRs. Before coding assistants, many of those PRs never got submitted, not because the code was never written (though there was obviously less of it) but because contributors were conscious of how their contributions might be perceived. In many cases, the changes never saw the light of day outside of the fork.
LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.
So I don't think the question is whether machine-generated code is low quality at all, because that is hard to judge, and frankly coding assistants can produce high-quality code (with guidance). The question is who made the contribution. With rising volumes, we will see an increasing number of rejections.
By the way, we do this too internally. We have a script that deletes LLM-generated PRs automatically after some time. It is just easier and more cost-effective than reviewing the contribution. Also, PRs get rejected for the smallest of reasons.
If it doesn't pass the smell test moments after the link is opened, it gets deleted.
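For the curious, the gist of such a cleanup script might look like the sketch below. Everything here is assumed rather than taken from the actual script: the `llm-generated` label, the cutoff, and the use of GitHub-style PR objects (note that GitHub's API only lets you close PRs, not delete them).

```python
from datetime import datetime, timedelta, timezone

# Hypothetical staleness rule: a PR is auto-closed if it carries the
# (assumed) "llm-generated" label and has sat past the cutoff.
def is_stale_llm_pr(pr: dict, now: datetime, max_age_days: int = 14) -> bool:
    labels = {label["name"] for label in pr.get("labels", [])}
    if "llm-generated" not in labels:
        return False
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    return now - opened > timedelta(days=max_age_days)

def select_stale(prs: list[dict], now: datetime) -> list[int]:
    # Return the PR numbers that should be closed.
    return [pr["number"] for pr in prs if is_stale_llm_pr(pr, now)]
```

Closing each returned number would then be a `PATCH` to `/repos/{owner}/{repo}/pulls/{number}` with `{"state": "closed"}`.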
> LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.
Of course you could have an agent on your side do this, so I take you to mean that an LLM which submits a PR and is not instructed to perform such a reflection will not do it intrinsically the way a human would, that is, as a necessary side effect of submitting in the first place (though one might be surprised).
It would be interesting to have an API that attempts to validate some attestation about how the submitting LLM's contribution was derived, i.e., force that reflection at submission time with some reasonable guarantees of veracity, even if the reflection would not otherwise have happened. Perhaps some future API could enforce such a contract among the various LLMs.
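To make the idea concrete, here is a minimal sketch of what such a contract could look like. No such API exists; every field name and required reflection below is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical contract: reflections a submitting agent must attest to.
REQUIRED_REFLECTIONS = {
    "why_worth_submitting",   # did the agent weigh whether to submit at all?
    "how_derived",            # the prompt/tool trail the change came from
    "self_review_findings",   # what the agent found re-reading its own diff
}

@dataclass
class Attestation:
    model: str
    reflections: dict = field(default_factory=dict)

def validate(att: Attestation) -> list[str]:
    # Return the reflections that are missing or empty; an empty list
    # means the submission satisfies the (hypothetical) contract.
    return sorted(
        key for key in REQUIRED_REFLECTIONS
        if not att.reflections.get(key, "").strip()
    )
```

A forge could then refuse any PR whose `validate()` result is non-empty, forcing the reflection to happen at submission time, though veracity would still depend on whatever attestation mechanism backs it.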