
alway · yesterday at 6:39 PM

I tend to trust the voting system to separate the wheat from the chaff. If I were to try to draw a line, though, I’d start at the foundation: leave room for things that add value, and avoid contributions that don’t. I’d suggest that line might be somewhere like “please don’t quote LLMs directly unless you can identify the specific value you’re adding beyond the quote itself.” Or “…unless you’re adding original context or using them in a way that’s somehow non-obvious.”

Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art; Gemini said Samuel Clemens was an important figure. I don’t know whether that’s historically true, but it did lead me to his very cool body of work [0], which seems relevant here.”

Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y].”

And of course, meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y].”

There’s the laziness factor, and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior turns in the context, and any citations (real or hallucinated) the LLM includes. In that vein, I wonder whether it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.

[0] https://americanart.si.edu/blog/andrew-clemens-sand-art