I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (or more than they should). Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds.
The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.
When I was visiting home last year, I noticed my mom would throw her dog's poop in random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, so they won't step in the poop. I did my best to explain to her that 1) kids play in all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look up" whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
People hate being manipulated. If you feel like you're being manipulated but you don't know by whom or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable, destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you.
This is why social control won't work for long, even if AI supercharges it. We're already seeing the blowback from decades of advertising and public-opinion shaping.
I would go against the grain and say that LLMs take the power to shape mass preferences away from incredibly rich people and give it to the masses.
Bot armies previously needed an army of humans to give responses on social media, which is incredibly tough to scale unless you have money and power. Now, that part is automated and scalable.
So instead of only billionaires, someone with $100K could launch a small-scale "campaign".
Do you think these super-wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool, or do they somehow escape that capture?
AI is wrong so often that anyone who routinely uses one will get burnt at some point.
Users having unflinching trust in AI? I think not.
> Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds.
To add to that, this research paper[1] argues that people with low AI literacy are more receptive to AI messaging because they find it magical.
The paper is now published, but it's behind a paywall, so I shared the working-paper link.
[1] https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report...
And just look at all of history where totalitarians or despotic kings were in power.
Exactly. On Facebook everyone is stupid. But this is AI, like in the movies! It is smarter than anyone! It is almost like AI in the movies was part of the plot to brainwash us into thinking LLM output is correct every time.
…Also partially because it's better than most other sources
LLMs haven't been caught actively lying yet, which isn't something that can be said for anything else.
Give it five years and their reputation will be in the toilet too.
>people trust the output of LLMs more than other
There's one paper I saw on this, which covered the attitudes of teens. As I recall, they were unaware of hallucinations. Do you have any other sources on hand?
When LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. the HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly.
> The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.
Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.