> You must state the tool you used (e.g. Claude Code, Cursor, Amp)
Interesting requirement! Feels a bit like asking someone what IDE they used.
There shouldn't be a meaningful difference between the tools/providers unless you consistently saw a few underperform and chose to ban those or something.
The other rules feel like they might discourage AI use because of the extra boilerplate required (though I assume people using AI might make the AI fill some of it out), but I can understand why a project would want that sort of disclosure and control. That said, the rules themselves feel quite reasonable!
On a tangent: the origin of the problem with low-quality drive-by pull requests is GitHub's social nature. That might have been great when GitHub started, but nowadays many use it as portfolio padding and/or social proof.
"This person contributed to a lot of projects" heuristic for "they're a good and passionate developer" means people will increasingly game this using low-quality submissions. This has been happening to the fire.
Of course, AI just added kerosene to the fire, but re-read the policy and omit AI and it still makes sense!
A long-term fix for this is to remove the incentive. Paradoxically, AI might help here, because the signal can now be gamed so trivially that it's obviously no longer any kind of signal.
I can see this becoming a pretty generally accepted AI usage policy. Very balanced.
Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI-generated code is no substitute for human thinking, testing, and cleanup/rewriting.
On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it's correct. Adding indirection where it makes no sense is a mistake I've noticed LLMs make a lot.
> Bad AI drivers will be banned and ridiculed in public. You've been warned. We love to help junior developers learn and grow, but if you're interested in that then don't use AI, and we'll help you. I'm sorry that bad AI drivers have ruined this for you.
Finally an AI policy I can agree with :) Jokes aside, it might sound a bit too aggressive, but it's also true that some people really have no shame in overloading you with AI-generated shit. You need to protect your attention as much as you can; it's becoming the new currency.
I really like the phrase "bad AI drivers"...AI is a tool, and the stupid drive-by pull requests just mean you're being inconsiderate and unhelpful in your usage of the tool, similar to how "bad drivers" are a nightmare to encounter on a highway...so stop it or you'll end up on the dashcam subreddit of programming.
"Pull requests created by AI must have been fully verified with human use." should always be a bare minimum requirement.
See x thread for rationale: https://x.com/mitchellh/status/2014433315261124760?s=46&t=FU...
> Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.
I have a side project, git-prompt-story, to attach Claude Code sessions to commits as GitHub git notes. Though it's not that simple to do automatically (e.g. I need to redact credentials).
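For anyone curious, the rough shape of the idea looks something like the sketch below. This is not git-prompt-story's actual implementation; the paths, redaction patterns, and notes ref are made up for illustration.

    import re
    import subprocess
    from pathlib import Path

    # Illustrative assumptions, not git-prompt-story's real layout.
    SESSION_FILE = Path("session.jsonl")   # exported Claude Code session transcript
    NOTES_REF = "refs/notes/ai-session"    # dedicated notes ref, separate from default notes

    # Crude credential redaction; a real tool needs many more patterns.
    SECRET_PATTERNS = [
        re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[=:]\s*\S+"),
        re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style keys
    ]

    def redact(text: str) -> str:
        for pattern in SECRET_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    def attach_session(commit: str = "HEAD") -> None:
        redacted = redact(SESSION_FILE.read_text())
        tmp = Path("session.redacted.jsonl")
        tmp.write_text(redacted)
        # Attach the redacted transcript to the commit as a git note.
        # Share it with: git push origin refs/notes/ai-session
        subprocess.run(
            ["git", "notes", f"--ref={NOTES_REF}", "add", "-f", "-F", str(tmp), commit],
            check=True,
        )

    if __name__ == "__main__":
        attach_session()

The fiddly part is exactly the redaction step: anything the session touched (env vars, pasted logs) can leak into the transcript.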
> No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.
I find this distinction between media and text/code so interesting. To me it sounds like they think "text and code" are free from the controversy surrounding AI-generated media.
But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their models, it's naive to think that they didn't do the same with text and code.
A well-crafted policy that, I think, will be adopted by many OSS projects.
You need rules that sharp to deal with unhinged (or drunken) AI drivers, and that's unfortunate. But at the same time, letting people DoS maintainers' time at essentially no cost isn't an option either.
If you prefer not to use GitHub: https://gothub.lunar.icu/ghostty-org/ghostty/blob/main/AI_PO...
Sounds reasonable to me. I've been wondering about encoding detailed AI disclosure in an SBOM.
On a related note: I wish we could agree on rebranding the current LLM-driven, never-gonna-AGI generation of "AI" to something else… now I'm thinking of when I read the in-game lore definition for VI (Virtual Intelligence) back when I played Mass Effect 1 ;)
A factor that people have not considered is that the copyright status of AI-generated text is not settled law; future precedent or new legislation may retroactively change the copyright status of a whole project.
Maybe a bit unlikely, but still an issue no one is really considering.
There has been a single ruling (I think) that AI-generated code is uncopyrightable, and at least one affirmative fair-use ruling; both are from the lower courts. I'm still of the opinion that generative AI is not fair use because it's clearly substitutive.
A good PR written with AI should be impossible to distinguish from a non-AI one.
I recently had to write a similar policy for my TUI feed reader, after getting some spammy AI-slop PRs: https://github.com/CrociDB/bulletty?tab=contributing-ov-file...
The fact that some people will straight up lie after submitting you a PR with lots of _that type_ of comment in the middle of the code is baffling!
Ultimately what's happening here is AI is undermining trust in remote contributions, and in new code. If you don't know somebody personally, and know how they work, the trust barrier is getting higher. I personally am already ultra vigilant for any github repo that is not already well established, and am even concerned about existing projects' code quality into the future. Not against AI per se (which I use), but it's just going to get harder to fight the slop.
Another project simply paused external contributions entirely: https://news.ycombinator.com/item?id=46642012
Another idea is to simply promote donating AI credits instead of output tokens. It would be better to donate credits, not outputs, because people already working on the project are better at prompting and steering the AI.
Honestly, I don't care how people come up with the code they create, but I hold them responsible for what they try to merge.
I work in a team of 5 great professionals, and there hasn't been a single instance since Copilot launched in 2022 in which anybody, for any single modification, did not take full responsibility for what was committed.
I know we all use it, to different extents and in different ways, but the quality of what's produced hasn't dipped one bit; I'd even argue it has improved, because LLMs can find answers more easily in complex codebases. We started vendoring our main external dependencies as git subtrees under a `_vendor` directory, and it's super useful for finding information about them directly in their source code and tests.
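Concretely, the vendoring step is just plain `git subtree`; a rough sketch of how one might script it (the dependency names, URLs, and refs below are placeholders, not our real setup):

    import subprocess

    # Placeholder dependencies; substitute real URLs and pinned refs.
    VENDORED_DEPS = {
        "_vendor/libfoo": ("https://github.com/example/libfoo.git", "v1.2.3"),
        "_vendor/libbar": ("https://github.com/example/libbar.git", "main"),
    }

    def vendor_all() -> None:
        for prefix, (url, ref) in VENDORED_DEPS.items():
            # `git subtree add` copies the dependency's tree into the prefix
            # directory (squashed to a single commit), so code search and LLM
            # tools can read its sources and tests without leaving the repo.
            subprocess.run(
                ["git", "subtree", "add", f"--prefix={prefix}", url, ref, "--squash"],
                check=True,
            )

    if __name__ == "__main__":
        vendor_all()
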
It really is that simple. If your teammates are producing slop, that's a human and professional problem, and those people should be fired. If you use the tool correctly, it can help you a lot with finding information and connecting dots.
Any person with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line and forfeiting the constant work of resolving design tensions.
Of course, open source is a different beast. The people committing may not be professionals and have no real stakes, so they have little to lose by producing slop, whereas maintainers are already stretched thin in time and attention.
With limited training data, that LLM-generated code must be atrocious.
TL;DR: don't be an asshole and produce good stuff. But I have a feeling this is not the right direction for the future. Distrust the process; only trust the results.
Moreover, this policy is essentially unenforceable, because good AI use is indistinguishable from good manual coding (and sometimes even the reverse). I don't believe in coding policies where maintainers need to spot whether AI was used; I believe in experienced maintainers who can tell whether a change looks sensible or not.
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competence as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".
Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.