
gensym | yesterday at 8:24 PM

That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.

In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., show what you are thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal too. LLM editing destroys that signal.

And it gets worse, because LLMs destroy that signal in one direction: towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it," when what they are delivering is "generic, professional-sounding ideas phrased in a way that convinces you they are your own."


Replies

Kim_Bruning | yesterday at 8:52 PM

> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.

Oh, right, yes, if you're not careful they can definitely do that.

But look at what julius_eth_dev is actually saying they're doing:

> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."

That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.

I thought you were going to go somewhere really interesting, actually, like 'the LLM convinces you that its arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)

fluffybucktsnek | yesterday at 8:51 PM

I get what you are saying, but I disagree with the last part, "[...] way to convince you they are your own". If it managed to convince the author that an idea is their own, chances are it is their own. Especially so if the author reviews and edits the output prior to posting it.

The messiness may show glimpses of the process, but in isolation it will likely distort and corrupt the intended message through partial framing.