Hacker News

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

171 points | by pjmlp | today at 8:54 AM | 139 comments

Comments

ptnpzwqd | today at 9:13 AM

I think this is a reasonable decision (although maybe increasingly insufficient).

It doesn't really matter what your stance on AI is, the problem is the increased review burden on OSS maintainers.

In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail.

Policies like this help decrease that review burden by outright rejecting whatever can be identified as LLM-generated at a glance. That probably covers a fair bit today, but it might get harder over time, so I suspect we will eventually see a shift towards more trust-based models, where you cannot submit PRs unless you have somehow been approved in advance.

Even if we assume LLMs would consistently generate code of good enough quality, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.

jacquesm | today at 11:51 AM

Hiring managers could help here: if you feel like someone's open source contributions are important for your hiring decision, make it plain that they only count when the person is a core contributor. Drive-by contributions should not count for anything, even if accepted.

lukaslalinsky | today at 9:30 AM

I think we will soon get into an interesting situation where project maintainers use LLMs because they truly are useful in many cases, but ban contributors for doing the same, because they can't review how well the user guided the LLM.

dev_l1x_be | today at 11:41 AM

In my experience, with the right set of guardrails LLMs can deliver high-quality code. One interesting aspect is doing security reviews and formal verification with agents, which has proven very useful in practice.

https://www.datadoghq.com/blog/ai/harness-first-agents/

yla92 | today at 10:32 AM

Zig has a similar strict no-LLM policy:

https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy

qsera | today at 11:46 AM

I think clients who care about getting good software will eventually require that LLMs not be directly used during development.

One way to frame the use of LLMs is to compare a dynamically typed language with a statically typed functional one. Functional programming languages with static typing make it harder to implement a solution without understanding the problem and developing an intuition for it.

But programming languages with dynamic typing will let you create (partial) solutions with a lesser understanding of the problem.

LLMs make it even easier to implement even more partial solutions while understanding even less of the problem (in fact, zero understanding is required).
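A hypothetical Python sketch of the distinction being drawn (the function and inputs are invented for illustration): dynamic typing lets a partial solution run fine until the unhandled case is actually exercised, while static annotations checked by a tool such as mypy make the assumption explicit up front.

```python
# Hypothetical example: a "partial" parser that only handles one input
# shape, but runs without complaint until the gap is hit.

def parse_port(value):
    # Works for strings like " 8080 "; silently assumes value is a str.
    return int(value.strip())

print(parse_port(" 8080 "))  # -> 8080, the happy path hides the gap
# parse_port(None) only fails at runtime, with an AttributeError.

# The annotated version states the assumption; a static checker such as
# mypy would reject a call like parse_port_typed(None) before it runs.
def parse_port_typed(value: str) -> int:
    return int(value.strip())
```

The point of the analogy: each step down this ladder (static typing, dynamic typing, LLM generation) lets more of the solution exist before the author has understood the problem.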

throwaway2037 | today at 9:21 AM

    > any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Weirdly, as a native English speaker, I find that this term makes the policy less strict. What about submarine LLM submissions?

I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.

khalic | today at 9:11 AM

The LLM ban is unenforceable, and they must know this. Is it meant to scare off the most obvious stuff and provide a way to kick people off easily in cases of incomplete evidence?

BirAdam | today at 11:28 AM

So... my prediction is that they will either have to close off their dev process or start using LLMs to filter contributions in an attempt to detect submissions from LLMs.

tkel | today at 9:12 AM

Glad to see they are applying some rigor. I've started removing AI-heavy projects from my dependency tree.

hparadiz | today at 9:33 AM

I am 100% certain that upstream code Redox OS relies on already has LLM-generated code in it.

stuaxo | today at 9:18 AM

We need LLMs that have a certificate of origin.

For instance, a GPL LLM trained only on GPL code, where the source data is all known and the output is all GPL.

It could be done with a distributed effort.

cardanome | today at 10:47 AM

I am wondering why people spam OSS with AI-slop pull requests in the first place.

Are they really so delusional as to think their AI slop has any value to the project?

Do they think acting like a complete prick and increasing the burden for the maintainers will get them a job offer?

I guess interacting with a sycophantic LLM for hours truly rots the brain.

To spell it out: No, your AI generated code has zero value. Actually less than that because generating it helped destroy the environment.

If the problem could be solved by using an LLM and the maintainers wanted to, they could prompt it themselves and get much better results than you, because they actually know the code. And no, AI will not help you "get into open source". You don't learn shit from spamming open source projects.

aleph_minus_one | today at 9:58 AM

While I am more on the AI-hater side, I don't consider this to be a good idea:

"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"

For example:

- What if a non-native English speaker uses the help of an AI model to formulate an issue/task?

- What about having an IDE plugin that merely gives syntax and small code-fragment suggestions ("autocomplete on steroids")? Does this policy mean that programmers are also restricted in which IDEs and plugins they are allowed to have installed if they want to contribute?

hagen8 | today at 10:03 AM

They will sooner or later change that policy or get very slow in keeping up.

The-Ludwig | today at 9:12 AM

Hm, wondering how to enforce this rule. Rules without any means to enforce them can put honest people at a disadvantage.

algoth1 | today at 10:12 AM

What would constitute "clearly LLM-generated", though?

dana321 | today at 11:15 AM

Generating small chunks of code with LLMs to save time works well; as long as you can read and understand the code, I don't see what the problem is.

api | today at 10:44 AM

AI has the potential to level the playing field somewhat between open source and commercial software and SaaS that can afford armies of expensive paid developers.

Time consuming work can be done quickly at a fraction of the cost or even almost free with open weights LLMs.

scotty79 | today at 10:27 AM

I see a lot of OSS forks in the future, where people just fork to fix their issues with LLMs without going through maintainers. Or even do full LLM rewrites of smaller stuff.

estsauver | today at 9:10 AM

They're certainly welcome to do whatever they like, and for a microkernel-based OS it might make sense - I think the output from a lot of LLMs there is probably pretty "meh".

I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A JavaScript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral Small via the chat app, it's probably just a waste of time.

flanked-evergl | today at 10:38 AM

Spiritually Amish

emperorxanu | today at 9:15 AM

[flagged]

menaerus | today at 10:29 AM

Let someone from the Redox team go read [1], [2], and [3]. If they still insist on keeping their position, then... well. The industry is being redefined as we speak, and everyone doing the push-back is really pushing against themselves.

[1] https://www.datadoghq.com/blog/ai/harness-first-agents/

[2] https://www.datadoghq.com/blog/ai/fully-autonomous-optimizat...

[3] https://www.datadoghq.com/blog/engineering/self-optimizing-s...

P.S. I know this will be downvoted to death but I'll leave it here anyway for folks who want to keep their eyes wide open.

baq | today at 9:27 AM

While I appreciate the morality and ethics of this choice, the current trend means projects going in this direction are making themselves irrelevant (don't bother quipping about how relevant Redox is today, thanks). E.g. top security researchers are now using LLMs to find new RCEs and local privilege escalations; no reason why the models couldn't fix these, too - and that's only the security surface.

IOW I think this stance is ethically good, but technically irresponsible.

lifis | today at 9:27 AM

Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.

What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.

show 5 replies