A popular opinion I've seen many times is that the sycophancy is intentional, on the reasoning that models do much better on human-in-the-loop benchmarks (e.g. LM Arena) when they're sycophantic.
(I have no knowledge of whether or not this is true)
I'm sure there are a lot of "dark patterns" at play at the frontier model companies: they're 10-figure businesses engaging directly with consumers, and they're just a couple of years old, so they're going to throw everything they can at the wall to see what sticks. I'm certainly not sticking up for OpenAI here. I'm just saying this article refutes its own central claim.
It was an accident at first. Not so much now.
OpenAI explicitly curbed sycophancy in GPT-5 with specialized training (the whole 4o debacle shook them), and then re-tuned GPT-5 for more sycophancy when users complained.
I do believe that OpenAI's entire personality tuning team should be fired into the sun, and this is a major reason why.