
roywiggins, last Tuesday at 7:02 PM

Don't love how ChatGPT the readme is; the bullet points under "Why AIsbom?" are very, very ChatGPT.


Replies

xpe, last Tuesday at 8:04 PM

I will preemptively grant the narrow point that if a project demonstrates poor quality in its code or text (i.e. what I mean when I say "slop"), it can dissuade potential users. However, the "Why AIsbom?" section strikes me as clear and informative.

Many people prefer human writing. I get it, and I think I understand most of the underlying reasons and emotional drives. [1]

Nevertheless, my top preference (I think?) is clarity and accuracy. For technical writing, if these two qualities are present, I'm rarely bothered by what people may label "AI writing". OTOH, when I see sloppy, poorly reasoned, out-of-date writing, my left hand readies itself for ⌘W. [2]

A suggestion for the comment above, which makes a stylistic complaint: be specific about what could be improved.

Finally, a claim: over time, valid identification of some text as being AI-generated will require more computation and be less accurate. [3]

[1]: Food for thought: https://theconversation.com/people-say-they-prefer-stories-w... and the backing report: https://docs.iza.org/dp17646.pdf

[2]: To be open, I might just have a much higher-than-average bar for precision -- I tend to prefer reading source materials rather than derivative press coverage, and I prefer carefully worded, dry documentation over an informal chat description. To keep digging the hole for myself, I usually don't like the modern practice of putting unrelated full-width pictures in blog posts because they look purdy. Maybe it comes from a "just the facts, please" mentality when reading technical material.

[3]: I realize this isn't the clearest testable prediction, but I think the gist of it is falsifiable.