I use AI for the elements I feel are weak or unclear in the transcription. Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner. I cannot always point out why I think they are unclear or jumbled.
I don't feel this is an imposition on others; I think it's the opposite. It enhances signal by reducing nitpicking and the spelling/grammar errors that might muddle intent, and it reminds me of proper sentence structure.
Many of us are guilty of run-ons, fragments, and overly large blocks of text[1] because that's closer to how people converse verbally. But posts on the internet are not casual conversation between humans; they are exchanges of ideas.
[1] This is a classic example where I had to go back and edit it to ensure it was readable. As you do self-review with any commit ^^
Your “unclear or jumbled” but authentic comment is always better than normalised, calibrated LLM output that “feels like chewing sand”.
I just wrote a similar comment elsewhere, but I would much rather just read your jumbled or unclear writing than whatever's output from an LLM. At least I know you meant at one point the words that are written. It's not a grammar test in English class or an academic paper; if you use a few fragments or run-ons, it's not a big deal.
There is a tradeoff for sure.
But, even though I think slippery slope arguments should be used very sparingly, there is a good case for one here.
Also, learning how to communicate better, and learning to listen better, is a real value-add to this site. That would get washed out if both writing, and therefore reading, were spoon-fed by models, which are also washing away individuality of expression and nuance of views.
> Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner.
Same here. And sometimes I get downvoted and treated like an LLM — in the name of valuing the human.
To me, what matters is the will behind the words. Ideas and words themselves are cheap (this becomes clearer every day in the AI age) — they're almost nothing until they're executed and actually help someone.
> "The Dao can be told, but what is told is not the eternal Dao. The Name can be named, but what is named is not the true Name." — Laozi, Dao De Jing
Like code we write — it's dead text on a screen until it's running. And what we really care about is the running effect — and that is exactly the reason, the will, behind why we write the code in the first place.
>I use AI for the elements I feel are weak or unclear in the transcription. Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner. I cannot always point out why I think they are unclear or jumbled.
Your point is well taken.[0]
Personally, I take a different approach. I use a 5 minute delay for comments on HN so I can look at the post after I submit it, but before anyone else sees it.
This gives me the opportunity to read over my comment and the comment to which I've replied to make sure my prose is decent, my point is clear and any typos or other inaccuracies can be corrected.
I don't use LLMs as an editor as I've found that I'm probably a better editor than the average internet user, which is what LLMs represent.
Perhaps that's arrogant of me, but I'm much more comfortable standing by what I write when it's me writing and editing.
[0] Please note that this is most certainly not a swipe at you or anyone else who uses LLMs as an editor. I just have a different perspective which pushes me in a different direction.
Do we really need to see your every half-baked thought on here though? It's okay not to post or to set a high bar for yourself.
Frankly, even without AI, most communities get degraded as they become more popular and the stream of comments becomes overwhelming. Like there are over 1000 comments on this story and let's be honest, most of it isn't adding value. A great many of them are repeats of other posts, so the person didn't read other people's comments either.
The solutions seem to boil down to making the karma system more draconian. Instead of focusing on downvoting garbage and upvoting gems, the slush of "mid" posts has to be dealt with somehow. I'm not sure rate-limiting accounts would make a noticeable difference. Ironically, perhaps AI is also a solution to the issue, since it can, for example, read all the other comments and potentially assign each post a value score in the overall context.
I probably wouldn't post this either, but I'm hitting reply because of the topic at hand...
I get the sense the point of the HN rule is to preserve unique human expression, regardless of where someone's communication skills are at a given point. I periodically see articles on HN with stale turns of phrase and signs of poor LLM use (which becomes distracting while reading), and then the author sometimes mentions in the HN comments that they used an LLM to 'help' with their post based on some list of points they wanted to communicate. When it's relied on too heavily like that, it smothers the author's own voice.
If an opinion/idea is communicated in the voice of another, then something unique to that user has been lost. If I had the germ of a premise, told someone else about it, found their expression of it clearer than my own, and then copied how they'd expressed it, I think I'd at least credit them. Otherwise our own growth in self-editing and clarity will just atrophy, and the internet will become a soup of homogenized ways of expressing things.