Hacker News

crote · today at 9:02 AM · 25 replies

Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to generate propaganda more cheaply serving as little more than a buzzword hook.

However, exactly the same applies to, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.


Replies

SCdF · today at 9:37 AM

I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (not that they should). Partly because it feels like it comes from a place of authority, and partly because of how self-confident AI always sounds.

The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.

show 13 replies
smartmic · today at 9:11 AM

But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].

[0]: https://smartmic.bearblog.dev/enforced-conformity/

show 3 replies
go_elmo · today at 9:12 AM

Good point - it's not a previously nonexistent mechanism - but AI leverages it even more. A Russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroots preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious).

show 2 replies
zaptheimpaler · today at 9:33 AM

Making something 2x cheaper is just a difference in quantity, but 100x cheaper and easier becomes a difference in kind as well.

show 1 reply
muldvarp · today at 9:46 AM

But the entire promise of AI is that things that were expensive because they required human labor are now cheap.

So if good things happening more often because AI made them cheap is an advantage of AI, then bad things happening more often because AI made them cheap is a disadvantage of AI.

coldtea · today at 1:15 PM

> Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to generate propaganda more cheaply serving as little more than a buzzword hook.

That's the entire point: AI cheapens the cost of persuasion.

A bad thing X vs. a bad thing X with a force multiplier/accelerator that makes it 1000x as easy, cheap, and fast to perform is hardly the same thing.

AI is the force multiplier in this case.

That we could of course also do persuasion pre-AI is irrelevant, in the same way that, when we talk about the industrial revolution, the fact that a craftsman could manually make the same products without machines is irrelevant to the impact of the industrial revolution and its standing as a standalone historical era.

t_mann · today at 9:39 AM

Sounds like saying that nothing about the Industrial Revolution was steam-engine-specific. Cost changes can still represent fundamental shifts in terms of what's possible; "cost" here is just an economist's way of saying technology.

tgv · today at 10:57 AM

That's one of those "nothing to see here, move along" comments.

First, generative AI has already changed social dynamics, despite Facebook and the like being around for more than a decade. People trust AI output much more than a Facebook ad, and it can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to Facebook. The combination of those two factors does warrant the title.

gaigalas · today at 10:23 AM

> nothing in the article is AI-specific

Timing is. Before AI this was generally seen as crackpot talk. Now it is much more believable.

show 3 replies
ddlsmurf · today at 12:05 PM

What makes AI a unique new threat is that it enables a new kind of attack that is both surgical and mass-scale: you can now generate the ideal message per target. Basically, you can whisper to everyone, or to each group, at any granularity, the most convincing message. It also removes a lot of language and culture barriers. For example, Russian or Chinese propaganda is ridiculously bad when it crosses borders, at least when targeting the English-speaking world; that also becomes a lot easier/cheaper.

ekjhgkejhgk · today at 10:53 AM

> Note that nothing in the article is AI-specific

No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still.

zahlman · today at 6:09 PM

The thread started with your reasonable observation but degenerated into the usual red-vs-blue slapfight, powered by exactly the "elite shaping of mass preferences" and "cheaply generated propaganda" at issue.

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

I'm disappointed.

jacquesm · today at 11:45 AM

That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one.

Compared to that 'Russian troll army', you can do this on your lonesome, spending a tiny fraction of what the troll army would cost, and it would require zero organizational effort by comparison. This is a real problem, and dismissing it out of hand is a bit of a shortcut.

odiroot · today at 12:05 PM

It has been practiced by populist politicians for millennia, e.g. pork barrelling.

citrin_ru · today at 9:29 AM

AI (LLM) is a force multiplier for troll armies. For the same money bad actors can brainwash more people.

show 1 reply
rsynnott · today at 11:57 AM

Making doing bad things way cheaper _is_ a problem, though.

sam-cop-vimes · today at 11:30 AM

Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to be to where its overlords want them to be.

Given how effective LLMs are at using language, and given that AI companies are able to tweak their behaviour, this is a clear and present danger, much more so than Facebook ads.

insane_dreamer · today at 3:15 PM

> You don't need any AI for this.

AI accelerates it considerably and, being pushed everywhere, weaves it into the fabric of most of what you interact with.

If instead of searches you now have AI queries, then everyone gets the same narrative, created by the LLM (or a few different narratives from the few models out there). And the vast majority of people won't know it.

If LLMs become the de-facto source of information by virtue of their ubiquity, then voila, you now have a few large corporations who control the source of information for the vast majority of the population. And unlike cable TV news which I have to go out of my way to sign up and pay for, LLMs are/will be everywhere and available for free (ad-based).

We already know models can be tuned to have biases (see Grok).

kev009 · today at 11:24 AM

Yup "could shape".. I mean this has been going on time immemorial.

It was odd to see random nerds who hated Bill Gates the software despot morph into "acksually he does a lot of good philanthropy" in my lifetime, but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.

The game is old as well as evergreen. Hearst, Nobel, and Howard Hughes come to mind of old; Musk with Twitter, Ellison with TikTok, Bezos with the Washington Post these days, etc. The costs are already insignificant because they generally control other people's money to run these things.

show 1 reply
bjourne · today at 10:28 AM

While true in principle, you are underestimating the potential of AI to sway people's opinions. "@grok is this true" is already a meme on Twitter, and it is only going to get worse. People are susceptible to eloquent BS generated by bots.

show 1 reply
dfxm12 · today at 2:18 PM

It is worth pointing out that ownership of AI is becoming more and more consolidated over time, by elites. Only Elon Musk or Sam Altman can adjust their AI models. We recognize the consolidation of media outlets as a problem for similar reasons, and Musk owning grok and twitter is especially dangerous in this regard. Conversely, buying facebook ads is more democratized.

tim333 · today at 12:25 PM

Also, I think AI, at least in its current LLM form, may be a force against polarisation. If you go on X/Twitter and type "Biden" or "Biden Crooked" into the "Explore" box in the side menu, you get loads of abusive stuff, including the president slagging him off. Ask Grok about those and it says Biden was a decent bloke, and more: "there is no conclusive evidence that Joe Biden personally committed criminal acts, accepted bribes, or abused his office for family gain".

I mention Grok because, being owned by a right-leaning billionaire, you'd think it'd be one of the first to go.

xbmcuser · today at 9:27 AM

[flagged]

justsomejew · today at 10:06 AM

"Russian troll armies…" If you believe in "Russian troll armies", you are welcome to believe in flying saucers as well.

show 5 replies
pbreit · today at 9:47 AM

Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era?

show 2 replies