The reason search got so bad, even pretending google themselves are some beneficial actors, is because it is a directly adversarial process. It is profitable to be higher in search results than you "naturally" would be, so of course people attack it.
Google's entire theory of founding was that you could do better than Yahoo hand picking websites with an algorithm, and pagerank was the demonstration, but IMO that was only possible with a dataset that was non-adversarial because you couldn't "attack" yahoo and friend's processes from the data itself.
The moment that changed, the moment pagerank was used in production, the game was up. As long as you try to use content to judge search ranking, content will be changed, modified, abused, cheated to increase your search rank.
The very moment it becomes profitable to do the same for LLM "search", it will happen. LLMs are rather vulnerable to "attack", and will run into the exact same adversarial environment that nullified the effectiveness of pagerank.
This is orthogonal also to if you believe Google let search be shittier to improve their ad empire. LLM "search" will have exactly this same problem if you believe it exists.
If you build a credit card fraud model on a dataset that contains no attacks, you will build a rather bad fraud model. The same is true of pagerank and algorithmic search.
Oh, that’s an interesting thought, I was really hoping LLMs would break the cycle there but of course there’s no reason to assume they’d be immune to adversarial content optimization.