> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.
Oh, right, yes, if you're not careful they can definitely do that.
But look at what julius_eth_dev is actually saying they're doing:
> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."
That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.
I thought you were going to go somewhere really interesting, actually, like maybe 'the LLM convinces you that its arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly-alarming and cool like that! ;-)
https://news.ycombinator.com/item?id=47331891
> "Error: Reached max turns (1)"
Or, you know... not at all. I mean, their argument happened to be good. But I have doubts they're telling the truth here.
(Flagging the comment makes it dead, but that also hides the substantive discussion that came after. I'm genuinely not sure what the best move is here.)