Nice observation about AI-generated content:
> I’ve had the idea that from a social perspective it’d be regarded like plastic surgery, in that it only looks weird when its over-done, or done badly.
It's the same with writing as with video. Some videos now are genuinely hard to tell apart from real footage; you can only tell it's AI when it's done badly. When it's done well, you don't even know it's AI.
So it creates a selection effect where people associate AI only with fake and bad content. The good stuff, they don't associate with AI at all.
It's funny you mention that. The difference is that sometimes you need functionality without doing the plumbing yourself. At the end of the day, if you're getting the output you need, the process doesn't matter. It's an interesting analogy, but it only works if the inspector is another expert dev.
An important aspect of the comparison is that nobody is going to tell you your surgery is noticeable or looks bad.
Your friends, family, partner, and coworkers won't say anything; neither will people you meet casually, and certainly not service workers. Strangers aren't going to pull you aside to tell you the truth about your nose job.
I hope the same social taboo doesn't transfer over to AI content. We should honestly critique AI-generated content, whether it's used in whole or blended in part with human creations. If including AI content botched your article, saying so should be socially acceptable.
We saw some of this here on HN. It used to be that when AI content was submitted here, even mentioning it was LLM-generated was a social faux pas. The same went for LLM-generated comments, no matter how obvious they were; calling out a comment as AI was socially verboten, and you'd be finger-wagged at for it.
Eventually, AI fatigue caused the community to discount Show HN entries, submissions, and comments, and the signal-to-noise ratio could no longer be ignored.
Now, turn on showdead. Those same comments, which users were once expected to engage with as if they were made in good faith by real people, litter every submission's comment section. They objectively hurt discussion, and it's a good thing they're shadowbanned.
Culturally, I hope we can reach a point where critiquing AI content, including code, doesn't brand critics as haters, Luddites, or worse, and doesn't stifle conversation about what our communities really value and want.