It is no surprise; somehow they need to earn money. It will be interesting, though, how much the LLM's responses will be adapted. Legally, advertisements at least need to be marked as such for users, so either an LLM's response will be extended with ad content or replaced by it.
> It is no surprise, somehow they need to earn money
I kinda hate that a move needs to be surprising to be noteworthy or critiqued. If Meta leaked all of its users' data tomorrow, I really wish the reaction weren't "not surprised" but rather "tar and feather them".
In the same way, the need to earn money shouldn't be an excuse for whatever a company does. I'd be a lot more interested in knowing if/why you think this will be a net positive for society and why it should be allowed to happen.
I am pretty sure they can figure out massive loopholes, like how it's legal to train a model on stolen data but not to steal the data itself. For instance, advertisers could push model benchmarks that favour certain opinions, based on a biased selection of research papers. I think we've only seen the beginning of the intricate business models an AI company can come up with; it's far more convoluted than a search engine or even a social network.