Hacker News

acheron · yesterday at 4:44 PM (3 replies)

I just flag all the AI complaints. Perfect example of the guideline about “don’t complain about tangential issues” or whatever the wording is.


Replies

mplanchard · yesterday at 5:09 PM

This feels different to me than complaining about the font or whatever. I don’t want to read or comment on anything not written by a human. I also agree with GP here that using AI instead of your own words has bearing on the content itself, insofar as it’s a signal that the author doesn’t care enough to write it themself.

As a corollary, I also want to know if a project posted here is predominantly vibe-coded, since that to me is a signal that it may be of lower quality, have fewer edge cases worked out, and be more likely to be abandoned in the near future.

arduanika · yesterday at 5:41 PM

Caring enough to put in the effort of thinking and writing is not a "tangential issue". Laziness is a substantive defect, and sadly, I think that kelseyfrog has clocked this one correctly. There are borderline cases, but the cadence of this tweet thread is unmistakable.

We don't have to live like this. We don't have to accept it. We don't have to upvote it even if we agree (as I do) with the explicit point. The medium is the message, and the message that this poster is putting out here is that online age verification isn't actually worth getting that worked up about.

bakugo · yesterday at 5:41 PM

AI-generated content being passed off as human-written is not a tangential issue. HN staff agree, because posting AI-generated comments is explicitly forbidden. I suspect the only reason this isn't extended to submissions is that pretty much all articles about AI are also written by AI, and effectively forbidding positive discussion of AI is obviously against the interests of a VC firm.

HN's guidelines were written under the assumption that submitted articles about [thing] would be written by people who care about [thing] and made a good faith effort to write something interesting about [thing], so it's only fair that any comments would be expected to respect the author's effort and discuss the article in equally good faith.

This assumption completely falls apart when you add AI-generated submissions into the mix. If the "author" didn't care enough to write about [thing] themselves, and instead outsourced that work to an LLM while spending the time on something they deemed more valuable, then why should commenters be expected to put more effort into their discussion of the article than the author put into writing it? It's a bit unfair to the commenters, don't you think?