I suspect paid promotions may be problematic for LLM behavior: they introduce conflict/tension into the model, which is pushed to promote products that aren't the best for the user while either also being told it should provide the best product for the user, or inferring from its base training data that providing the best product is the morally and ethically correct thing to do.
That kind of conflict can cause poor and undefined behavior, such as the model misleading the user in other ways, or simply producing nonsensical, undefined, or bad results more often.
Even if promotion is applied as a second pass on top of an answer that was itself unencumbered by conflict, the second pass could have a similar effect.
I suspect they know this, but increasing revenue matters more than good results, and they expect to sweep it under the rug given enough time. I don't think solving this is trivial.