I think you shouldn't launder LLM output as your own, but in AI model discussion and new-release threads it can be useful to highlight examples of LLM outputs. Framing and usage are the key elements: I'm interested in what kinds of things people are trying. Pasting LLM output as a substitute for engagement isn't interesting, but comparing responses from several models to highlight their differences can be.
I also think it's sometimes fine to source additional information from an LLM if it advances the discussion. For example, if I'm confused about a topic, I might explore several AI responses and follow the source links they provide. If a link seems compelling, I'll note that I found it through an LLM and explain how it relates to the discussion.