The potentially bitter pill to swallow here is that we all need to get better at critical thinking.
There's a lot of talk about whether LLMs make discourse 'better' or 'worse', with very little attention given to the crisis online discourse was already in before they came around. Edelman was astroturfing long before GPT. Fox 'news' and the spectrum of BS between them and the NYT (arranged by how sophisticated they considered their respective pools of rubes to be) have always, always been propaganda machines and PR firms at heart, wearing the skin of journalism like Buffalo Bill.
We have needed to learn to think critically for a very long time.
Consider this: if you are capable of reading between the lines, and of dealing with what you read or hear on the merits of the thoughts contained therein, then how are you vulnerable to slop? If it was written by an AI (or a reporter, or some rando on the internet) but contains ideas you can turn over and understand critically for yourself, is it still slop? If it's dumb and it works, it's not dumb.
I'm not even remotely suggesting that AI will usher in a flood of good ideas. No, it's going to be used to pump propaganda and disseminate bullshit at massive scale (and perhaps occasionally help develop good ideas).
We need to inoculate ourselves against bullshit, as a society and a culture. Be a skeptic. Iron-man arguments against your beliefs. Be ready to bench-test ideas when you hear them and make it difficult for nonsense to flourish. It is (and has been) high time to get loud about critical thinking.