Hacker News

lifis · today at 9:27 AM · 5 replies

Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.

What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.


Replies

ptnpzwqd · today at 9:39 AM

The problem is the increasing review burden: with LLMs it is possible to create superficially valid-looking (but potentially incorrect) code without much effort, which will still take a lot of effort to review. So outright rejecting code that can be identified as LLM-generated at a glance is a rough filter to remove the lowest-effort PRs.

Over time this might not be enough, though, so I suspect we will see default deny policies popping up soon enough.

duskdozer · today at 9:28 AM

>Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

Why not?

usrbinbash · today at 9:34 AM

> Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

Every single production OS, including the one you use right now, was made before LLMs even existed.

> What makes sense is that of course any LLM-generated code must be reviewed by a good programmer

The time of good programmers, especially ones working for free in their spare time on OSS projects, is a limited resource.

The ability to generate slop using LLMs is effectively unlimited.

This discrepancy can only be resolved in one way: https://itsfoss.com/news/curl-ai-slop/

sh4zb0t · today at 10:14 AM

What a retarded view. All OSes you use today were developed without AI.

dagi3d · today at 9:34 AM

they already have...