I think that this requires some nuance. Was the post generated with a simple short prompt that contributed little? Sure, it's probably slop.
But what if the post was generated through a long process of back-and-forth with the model, where a human made significant modifications and additions? I don't think there's anything wrong with that.
One problem is that it's exceedingly difficult to tell, as a reader, which scenario you have encountered.
You do you.
I do think there's a great deal wrong with that, and I won't read it at all.
Humans can speak unto humans unless there's a language barrier. I am not interested in anyone's mechanically recovered verbiage, no matter how much they massaged it.
I don't see what value the LLM would add - writing itself isn't that hard. Thinking is hard, and outsourcing that to an LLM is what people dislike.