
altairprime · yesterday at 10:29 PM

> what if it turns out that

HN need not offer itself up as a Petri dish for AI writing experimentation. There are startups in that space, and at least one must be YC-funded, statistically speaking. Come back with the outcomes of the experiment you describe and make a case that they should change the rule. Maybe they will! As of today, though, they are apparently unconvinced.

> the average quality might even go down

We have a recent concrete analysis of Show HN indicating support for this possibility, which resulted in the mods banning new users from posting to Show HN (something they've probably resisted doing for close to twenty years, I imagine, given how frequent a spam vector that must be).

> Perhaps you’ll say that human+LLM text will never be as high-quality as human alone

Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me. I’ve made no claims about future capabilities here, and I’m not going to address this irrelevance further.

> in the long term, we will have to come up with more sophisticated criteria

Our current criteria seem sophisticated already. Perhaps you could make a case that AI-assisted writing helps avoid guideline violations. This one tends to be especially difficult for us all today:

“Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents.”


Replies

GMoromisato · yesterday at 11:12 PM

> Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me. I’ve made no claims about future capabilities here, and I’m not going to address this irrelevance further.

I apologize--the "you" I meant was the person currently reading my post, not the person I was replying to. I was merely trying to answer a common objection that I've heard.

> HN need not offer itself up as a Petri dish for AI writing experimentation.

I'm not sure HN has a choice. I don't think we can prevent posters from experimenting with LLMs to post on HN--even if they adhere to the guidelines. For example, can I ask the LLM to come up with the strongest argument it can and then re-write it in my own words? That seems to be allowed by the guidelines. Would someone even be able to tell that's what I did? [NOTE: I did not do that.]

I think you're arguing that we should not encourage even more use of LLMs on HN. I get that. But I feel that this community is uniquely qualified to search for better solutions.

> Our current criteria seem sophisticated already.

I hope you're right! That implies that you believe the current guidelines are sufficient to keep HN as the place we all love despite the assault from LLMs. I'm skeptical, but I've been wrong plenty of times!
