...that assumes LLMs will contribute garbage code in the first place. Will they, though?
The LLM isn’t contributing garbage; the user is, by (likely) not testing or verifying that it meets all requirements. I haven’t yet used an LLM that didn’t require some handholding to get to a good code contribution on a project with any complexity.
I heard a quote recently that I really like.
"AI sucks at most people's jobs. If you think AI is good at something, chances are you suck at it too."
The problem isn't that it can't write good code. It's that the person prompting it often doesn't know enough to tell the difference. There are way too many vibe coders these days who can generate a PR in 5 seconds but can’t explain a single line of it.