I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.
Those low-value complaints add nothing to the conversation, and the content didn't make it to the front page by being bad. If the sole objection is "AI bad", keep it to yourself... it's boring.
I think a steelman interpretation of the parent is that entirely LLM-generated projects should be disallowed. There's a lot of submissions on Show HN that seem completely vibe-coded to me (like, including the README), which is a very different situation IMO from someone who simply used Claude to write some—or even most—of the code. When even the human-facing portion of a submission is LLM-generated, it bothers a lot of people (myself included).
> I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.
Low value content is still content, written by a human being with a specific point. I would argue that LLM written content is even worse than that, because what value does it add when you or I can just ask the LLM itself for it? Its existence is solely that of regurgitation.
Without engaging in more ad hominem attacks (which are wrong, by the way), what's the issue with labeling AI content as what it is?
In every single article's comments now, there's always someone coming out of the woodwork to post "This article is written by LLM." These comments are about as useless as "The website's color scheme is annoying" and "The website breaks the [back button | scrollbar]." (which, by the way, are not allowed per the HN guidelines[1])
If anything should be banned, it's low-effort "This is AI" commentary. It adds absolutely nothing to the conversation.
1: https://news.ycombinator.com/newsguidelines.html
I'd argue that whether or not the article (or reply) was written by AI is a tangential annoyance at this point.