How do models currently assess the quality and accuracy/veracity of content when recommending products? What do the providers do to avoid a situation where more content === more traffic? Would love to see links to relevant research on this, if you have them. Much success to you, and I appreciate your AI-slop risk awareness.
First there is preselection, which depends on the fan-out queries the model comes up with and the content's performance across those queries on the search index.
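To make that concrete, here is a minimal sketch of what such a preselection stage could look like. Everything here is illustrative: the fan-out query templates and the lexical overlap scorer are hypothetical stand-ins for the model's query generation and the search index's real ranking function (e.g. BM25).

```python
# Hypothetical sketch: the model fans the user request out into several
# search queries, each document is scored against every query, and only
# the top aggregate performers are passed on to the model for assessment.
from collections import defaultdict

def fanout_queries(user_request: str) -> list[str]:
    # Stand-in for the model's query generation; in practice an LLM
    # produces these variations.
    return [
        user_request,
        f"best {user_request}",
        f"{user_request} review",
        f"{user_request} comparison",
    ]

def score(doc: str, query: str) -> float:
    # Toy lexical overlap score standing in for the index's real
    # ranking function.
    doc_terms = set(doc.lower().split())
    query_terms = set(query.lower().split())
    return len(doc_terms & query_terms) / len(query_terms)

def preselect(docs: dict[str, str], user_request: str, k: int = 2) -> list[str]:
    totals: dict[str, float] = defaultdict(float)
    for query in fanout_queries(user_request):
        for doc_id, text in docs.items():
            totals[doc_id] += score(text, query)
    # Keep only the documents that perform best across all fan-out queries.
    return sorted(totals, key=totals.get, reverse=True)[:k]

docs = {
    "a": "standing desk review with measurements and comparison data",
    "b": "our company history and mission statement",
    "c": "best standing desk picks after a month of testing",
}
print(preselect(docs, "standing desk"))  # → ['a', 'c']
```

The point of the sketch is just the funnel shape: content that performs well across many of the fanned-out queries survives preselection, and content that matches none of them never reaches the model at all.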
After that, the content is actually assessed by the model. This paper tested different strategies to improve performance at this last step: https://arxiv.org/pdf/2311.09735. Adding statistics, sources, and original data are all strategies that we apply.
In classic SEO, creating more and more content leads to "cannibalization": overlapping pages compete with each other for the same queries. Generally this hurts the performance of all the overlapping content so much that it is not worth it.