I've started doing this on social media. I got "called out" after using big words or using a dash in a sentence. So now I write less good on purpose, so whatever I comment doesn't get drawn into a sidetracked, off-topic witch-hunt.
As soon as someone yells "witch" you cannot prove you're not one, and I've even had people put my hand-written comments through "AI detector" websites that "proved" they were AI (they weren't). It literally just highlighted two popular English phrases.
LLMs were trained on sites like HN and Reddit, so now if you write like an HN or Reddit commenter, you sound like AI...
Here's one vote for just being the witch, if that's what people need from you.
Just make it what you want to say, how you want to say it. And when they come after you, shame them to the best of your ability or treat them like they aren't there.
I don’t think this is a good long term solution. LLMs can do easy language substitutions and you can even force them to add errors. So relying on that alone won’t work as people intentionally make things look more “human.”
> now I write less good on purpose, so whatever I commented doesn't get drawn into a sidetrack off-topic witch-hunt.
I've begun downvoting each and every entry that questions the authenticity of a comment or article.
I don't even bother checking whether the claim is true. A text can be AI-generated and interesting, or human-written and dumb.
I have never really gotten the impression that HN or Reddit commenters write in any particular way overall.
LinkedIn, OTOH....
I put a piece of text into one of those detectors, and the only line it flagged was the one line I actually wrote.
AI only uses big words to engage in elegant variation, not to compress information.
If someone calls an article like this a "jeremiad" I know they're a human.