I agree with you, but...
> Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication.
Not anymore. By many measures, bots now account for the majority of both the production and consumption of content on the internet. The social contract you mention has been broken for years, and this new technology has only cemented that.
Those of us who value communication with humans will have to find other platforms where content authorship is strictly regulated, or, at the very least, where tools are provided to somewhat reliably filter out machine-generated content. Or retreat from public spaces altogether.
FWIW I have very little hope that this issue will be addressed on HN, considering [1].
Frankly, it's in a lot of people's interest to keep platforms like HN free of LLM spam. It's in our interest as people who want to keep our discussion site for actual human discussion (though judging from the other comments in this thread, that sentiment isn't universally shared, god knows why). It's also in the interest of AI companies, since if they destroy internet spaces like this they lose valuable future training data. So I'm (perhaps foolishly) optimistic--or at least not completely pessimistic--that there's hope yet for us.
Incidentally, I foresee similar training-data-pollution issues arising as LLM coding takes over software engineering--which it seems set to keep doing, at least in the short term. If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc.) that LLMs are making such good use of today? It feels to me like we risk technological stagnation as our collective skills atrophy and the market value of those skills plummets. Kind of like airline pilots forgetting how to hand-fly a plane or handle edge cases because they rely on the autopilot all the time.