Hacker News

The Zig project's rationale for their anti-AI contribution policy

330 points | by lumpa, today at 2:15 AM | 159 comments

Comments

branko_d, today at 7:39 AM

From https://kristoff.it/blog/contributor-poker-and-ai/:

"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."

bvrmn, today at 11:17 AM

The funny thing is that LLMs are amazingly good at writing Zig. They can inspect stdlib source code to fix compatibility issues with newer compilers, and they're quite prolific with idioms.

For example, I got a working application from a minimal prompt like "I need an X11 tray icon app showing battery charge level". BTW, the result: https://github.com/baverman/battray/

Now I'm trying to implement a full taskbar to replace bmpanel2. Results are very positive: I got a feature-parity app in an hour, with solid Zig code.

future_crew_fan, today at 11:18 AM

The rule should be anti-fully-autonomous-PRs. (LLMs don't push bad code. People use LLMs to push bad code and DDoS the maintainers' mental bandwidth.)

hitekker, today at 4:45 AM

Apparently, the noise around the AI policy came from Bun's developers saying that the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity: https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

> Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.

lccerina, today at 7:59 AM

It seems that Zig people are following the path of ZeroMQ [1]: "To enforce collective ownership of the project, which increases economic incentive to Contributors and reduces the risk of hijack by hostile entities."

A healthy contributor community is more important than mere code performance, quantity of features, lines of code, etc.

[1] https://zguide.zeromq.org/docs/chapter6

grokys, today at 10:37 AM

My issue with AI-generated OSS contributions is:

If an AI improves developer productivity so much, why would the maintainers of an OSS project want unknown contributors to sit between them and the LLM? They'd be typing those queries into Claude Code themselves. To quote my colleague:

> We do not need a middleman to talk to AI models. We are not bottlenecked by coding.

jameson, today at 10:20 AM

LLMs are not as smart as the LLM vendors claim.

If they were, we wouldn't be having this conversation, because they would be fully autonomous.

People who blindly submit LLM-generated code, or don't cite its usage, really need to stop.

jart, today at 4:02 AM

> This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own? It's especially true if the open source project was vibe coded. AI and technology in general makes personalization cheap and affordable. Whereas earlier you had to use something that was mass produced to be satisfactory for everyone, now you have the hope of getting something that's outstanding for just you. It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.

julenx, today at 6:26 AM

The article explains Zig's stance in further detail, but the quoted part on its own caught my attention because my reading of it is rather "pro human communication" instead of "anti-AI".

baq, today at 6:59 AM

> why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

Perhaps that's what the maintainers should be doing after all. It still takes time and tokens, though; neither is free.

I'd personally rather have the maintainers spend the time writing as many docs and specs as possible so that future LLMs have strong guardrails. Zig's policy will be completely outdated in a couple of years, for better or worse. Someone will take Bun's fork, add a codegen improvement here and a linker improvement there, and suddenly you'll have a better, faster Zig outside of Zig.

gorgoiler, today at 9:52 AM

Presumably this only applies to newcomers? The thrust of their policy is to nurture new contributors. Once one has established oneself as a meaningful contributor — which the Bun team surely must have done by now — then it doesn’t matter where the code came from.

…in theory. In reality, I’m sure a policy like this can’t be selective and fair at the same time. Pick one!

KronisLV, today at 7:07 AM

> If a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

That's a fair thing to ask, though it seems like people will arrive at very different conclusions there.

felipeerias, today at 5:12 AM

The other side of this is that open source projects that allow AI tools will become more restrictive towards new contributors.

This already happens to some degree on large software projects with corporate backing (Web engines, compilers, etc.), where it is often not trivial to start contributing as an independent individual.

Reasonable people can disagree on whether one approach is inherently better than the other, as ultimately they seem to be optimising for different goals.

mikmoila, today at 9:53 AM

How about intellectual-property risks?

buggymcbugfix, today at 4:50 AM

One reason I love writing production code in Ur/Web is that LLMs are incapable of synthesising something even remotely resembling it. Keeps me on my toes.

I think this is a great policy by the Zig team.

SuperV1234, today at 10:06 AM

Perhaps if the Zig maintainers had an LLM review their terrible rationale they would have picked up on the fact that it logically makes no sense.

trklausss, today at 7:30 AM

Honestly, that doesn't sound too bad. It doesn't say you can't use LLMs; it just doesn't let an LLM be the author of a commit. Meaning, if you as a developer make yourself responsible for what the LLM wrote, go ahead. But be ready to answer the technical questions, be ready to get grilled in code review, and be ready to take the call if a CVE lands on that part of the code...

shirro, today at 6:40 AM

People shouldn't have to justify not putting up with bullshit. It is a sensible default.

feverzsj, today at 4:14 AM

No human should trust bullshit made by a bullshit machine.

slopinthebag, today at 6:08 AM

Go zig! I don't use the language but I totally respect where they're coming from and their mission and ethics.

For those who are pissed because a large OSS project isn't accepting LLM generated slop: Fuck off!

slopinthebag, today at 6:07 AM

Very convenient of Mr. Willison to omit the fact that Bun's upstream changes are total garbage and would not be upstreamed regardless of any policies, LLM-generated or not, since they are, as a Zig core team member articulated in a classier way, shite.

GaryBluto, today at 7:37 AM

I don't think I've ever heard anything positive about Zig. Every time I've seen the project mentioned, it's been them using bizarre black-and-white moral judgements to justify stupid decisions.

lukaslalinsky, today at 7:45 AM

On multiple occasions over the last few months, I have wished the Zig/ZSF team would use LLMs. I've found many copy-and-paste errors that simply wouldn't exist if mundane tasks were delegated to a good LLM. It's the same in the Zig community: I've seen PRs to projects I'm interested in boasting that they were all human-made, yet containing all kinds of trivial logical errors that even the worst LLM would catch.

jillesvangurp, today at 5:58 AM

It's a good rationale. But it points the finger at a real bottleneck in open source development: the burden of manually reviewing contributions, and the need to automate that with AI as well. Reviews were already becoming a problem before AI. Lots of projects have been dealing with a large influx of contributions from inexperienced developers all over the world looking to boost their CVs by inflating their GitHub statistics. It's the same dynamic that destroyed Stack Overflow, which, thanks to AI, has now been largely sidelined. And now that AI is here, those same inexperienced developers are using it at scale to generate even more garbage contributions.

Doing manual reviews of everything is very labor intensive and not scalable. However, AIs are pretty good at doing code reviews and verifying adherence to guard rails, contributor guidelines, and other rules. It's not perfect, but it's an underused tool. Both by reviewers and contributors. If your contribution obviously doesn't comply with the guidelines, it should be rejected automatically. The word "obviously" here translates into "easy to detect with some AI system".

Projects should apply a lot of scrutiny to contributions from new contributors, and most of that scrutiny should be automated. They should reserve their attention for things that make it past automated checks for contribution quality, contributor reputability, adherence to whatever rules are in place, etc. Reputability is a good way to ensure that contributions from reputable sources get priority. If your reputation is not great, you should expect more scrutiny and a lower priority.
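The automated gate described above could start as something very simple that runs before any human looks at a PR. A minimal sketch (the function, field names, and thresholds are all hypothetical, not any project's actual policy):

```python
# Hypothetical pre-review triage: auto-reject PRs that obviously fail
# contribution guidelines, so maintainers only see plausible submissions.

def triage_pr(diff_lines: int, first_time_contributor: bool, passes_ci: bool) -> list:
    """Return the list of reasons to reject this PR before human review.

    An empty list means the PR proceeds to a human reviewer.
    """
    reasons = []
    if not passes_ci:
        # e.g. the hallucinated PRs the article mentions that don't even compile
        reasons.append("fails CI / does not compile")
    if first_time_contributor and diff_lines > 1000:
        # e.g. the "insane 10 thousand line long first time PRs"
        reasons.append("first-time PR too large for meaningful review")
    return reasons

# A 10,000-line first-time PR that fails CI is rejected on both counts.
print(triage_pr(diff_lines=10_000, first_time_contributor=True, passes_ci=False))
```

Real systems would layer an LLM-based guideline check on top, but even dumb rules like these filter out the two failure modes the article complains about.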

CaptainFever, today at 7:21 AM

> LLM assistance breaks that completely. It doesn't matter if the LLM helps you submit a perfect PR to Zig - the time the Zig team spends reviewing your work does nothing to help them add new, confident, trustworthy contributors to their overall project.

The conclusion does not follow from the premises. They are assuming fully autonomous agents submitting PRs, not LLMs used as tools, the way Bun did. As always, the most ethical thing to do is to just ignore any anti-LLM policies and not disclose anything.
