
erelong · today at 2:46 AM

No:

I actually find it kind of surprising that this post and the top comments saying "yes" even exist, because I think the answer is so firmly "no". But I'll explain how I like to post elsewhere using AI (edit: and some reasons why I think LLM output is useful):

1. A unique, human-made prompt.

2. The AI output, marked "AI says:". This saves readers the tokens and time of running the prompt themselves, and it's really just to give them more information to argue for or against, which adds "value" for the conversation to consider.

3. I usually skim and trim the AI output manually to make sure it's saying something I'd actually like to share, just as I don't purely "vibe code" but skim the output to make sure it isn't doing anything extremely bad. The "AI says:" disclaimer makes clear that I may have missed something, but usually the output contains useful information that would be slower to gather through manual research. It's like citing Wikipedia or a web search: you're encouraged to cross-check anything that sounds questionable, but the information is good enough most of the time that it seems valuable to share.

Other points:

A. The AI-generated answers are just so good. Refusing to share them feels akin to refusing to use AI to program, even while many posts here describe positive experiences doing exactly that; it's really the same idea. I think the key is the "unique prompt": that's the human element in the discussion. Essentially I'm sharing "tweets" (microblogs) and then AI-generated essays about the topic, so maybe I have a different perspective on why this is totally acceptable: you can always scroll past AI output if it's labeled as such. Even for this post, you could have asked an AI "what are the pros and cons of allowing people to use LLM output to make comments" (a unique human prompt that adds to the conversation) and pasted the output for people to consider, and I'd expect the result to be a pretty good essay to read.

B. This is kind of like schools: AI is probably going to force them to adapt somehow, because you can just add "respond in a way that's less detectable to a human" to a prompt. At some point it becomes impossible to tell whether someone is "cheating" in school or posting LLM output in the comments here. But you don't need to despair, because what ultimately matters in forum comments is whether the information is useful, and if LLM output is useful it will be upvoted. (In other concerning related news, I'm pretty sure people are working on generating forum posts and comments with no humans involved at all!)

So I guess for me the conversation is more about how to handle LLM output, and maybe about people learning to comment or post with AI assistance (much as people are learning to code with AI assistance), rather than banning it outright, which seems very counterproductive to me.

edit: (100% human post btw!)