Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.
What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.
What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.
>Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.
Why not?
> Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.
Every single production OS, including the one you use right now, was made before LLMs even existed.
> What makes sense if that of course any LLM-generated code must be reviewed by a good programmer
The time of good programmers, especially ones working for free in their spare time on OSS projects, is a limited resource.
The ability to generate slop using LLMs is effectively unlimited.
This discrepancy can only be resolved in one way: https://itsfoss.com/news/curl-ai-slop/
What a backwards view. All the OSes you use today were developed without AI.
they already have...
The problem is the increasing review burden: with LLMs it is possible to create superficially valid-looking (but potentially incorrect) code with little effort, which will still take a lot of effort to review. So outright rejecting code that can be identified as LLM-generated at a glance is a rough filter to remove the lowest-effort PRs.
Over time this might not be enough, though, so I suspect we will see default deny policies popping up soon enough.