
cortesoft · yesterday at 9:51 PM

Well, the ‘intentionality’ is of the form of LLM creators wanting to maximize user engagement, and using engagement as the training goal.

The ‘dark patterns’ we see elsewhere aren’t intentional in the sense that the people behind them set out to harm their customers. They are intentional in the sense that those people have an outcome they want, and they follow whichever methods they find that produce it.

Social media feeds have a ‘dark pattern’ of promoting content that makes people angry, but the social media companies don’t intend to make people angry. They want people to use their site more, so they program their algorithms to promote content that has been demonstrated to drive more engagement. That promoting high-engagement content ends up promoting anger-inducing content is an emergent property.


Replies

tptacek · today at 12:42 AM

Hold on, because what you're arguing is that OpenAI and Anthropic deploy dark patterns, and I have zero doubt that they do. I'm not saying OpenAI has clean hands. I'm saying that on this article's own terms, sycophancy isn't a "dark pattern"; it's a bad thing that happens to be an emergent property both of LLMs generally and, apparently, of RL in particular.

I'm standing up for the idea that not every "bad thing" is a "dark pattern"; the patterns are "dark" because their beneficiaries intentionally exploit their hidden nature.
