Hacker News

Elites could shape mass preferences as AI reduces persuasion costs

379 points by 50kIters today at 8:38 AM | 398 comments

Comments

themafia today at 12:16 PM

It's not about "russian bot farms" persuading you, which I think is a ridiculous and unnecessarily reductive framing anyway.

It's about hijacking all of the federal and commercial data these companies can get their hands on and building a highly specific and detailed profile of you. DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir. The next step is using AI either to imitate you or to predict your reactions to particular stimuli.

Then presumably the game is finding the best way to turn you into a human slave of the state. I assure you, they're not going to use Twitter to manipulate your vote for president; they have much deeper designs on your wealth and ultimately your own personhood.

It's too easy to punch down. I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.

crote today at 9:02 AM

Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with AI's potential to generate propaganda more cheaply serving as the buzzword hook.

However, exactly the same argument applies to, say, targeted Facebook ads or Russian troll armies. You don't need AI for any of this.

spooky_deep today at 9:21 AM

They already are?

All popular models have a team fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance teams agree to is what gets tuned. Then millions of people use the output uncritically.

everdrive today at 10:42 AM

It's important to remember that being a "free thinker" often just means "being weird." It's quite celebrated to "think for yourself," and people always connect this to specific political ideas, suggesting that free thinkers will have "better" political ideas by not going along with the crowd. On one hand, this is not necessarily true; the crowd could have the better idea and the free thinker the crazy or bad one.

But also, there is a heavy cost to being out of sync with people; how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't guarantee much of anything: your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper (getting up from a squat should not be difficult if you're even moderately healthy). Does this really buy me anything? No. I'm living to my preferences and in line with my ideas, but people just think it's weird, and would be really uncomfortable with it unless I'd already built up enough trust/goodwill to overcome this quirk.

notepad0x90 today at 9:00 AM

ML has been used for influence for like a decade now, right? My understanding was that mining data to track people, as well as influencing them toward ends like ad engagement, are already somewhat mature practices. I'm sure LLMs are a boost, and they've been in wide use for at least 3 years now.

My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.

euroderf today at 10:25 AM

Thanks to social media and AI, inundating the mediasphere with a Big Lie (made plausible through sheer repetition) has become much more affordable. This is why the administration is trumpeting lower prices!

taurath today at 8:48 AM

We have no guardrails on our private surveillance society. I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.

asim today at 9:27 AM

I recently saw this paper on conversational networks, https://arxiv.org/pdf/2503.11714, and it got me thinking that a lot of the problem with polarization and power struggles is the lack of dialog. We consume a lot, and while we have opinions, too much of what we consume shapes our thinking. There is no dialog. There is no questioning. There is no discussion. On networks like X it's posts and comments. Even here it's the same: comments with replies, but not truly a discussion. It's rebuttals. A conversation is two-way and equal, a mutual dialog to understand differing positions. Yes, elites can reshape what society thinks with AI, and it's already happening. But we also have the ability to redefine our networks and tools to be two-way, not 1:N.

zkmon today at 9:18 AM

It's about enforcing single-mindedness across the masses, similar to soldier training.

But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups, etc. across the population and ingest people as uniform worker cells, same as what happens when a whale swallows smaller animals. The structures get dismantled.

The development level of a country is a good indicator of the progress of this digestion of internal structures and removal of internal identities. More developed means deeper reach of policy into people's lives, making each person more individualistic rather than family- or community-oriented.

Every new tech will be used by the state and businesses to speed up the digestion.

bravetraveler today at 9:08 AM

When I was a kid, I had a 'pen pal'. Turned out to actually be my parent. This is why I have trust issues and prefer local LLMs

t43562 today at 1:58 PM

The internet has already turned into a machine for influencing people through adverts. Businesses know it works. IMO this is the primary moneymaking mode of the internet, and everything else rests on it.

A political or social objective is just another advertising campaign.

Why invest billions in AI if it doesn't assist in the primary moneymaking mode of the internet? i.e. influencing people.

TikTok was banned because people really do believe that influence works.

lambdaone today at 10:15 AM

I think this ship has sailed: a lot of comments on social media are already AI-generated and posted by bots. Things are only going to get worse as time goes on.

I think the next battleground is going to be over steering the opinions and advice generated by LLMs and other models by poisoning the training set.

csvparser today at 9:11 AM

I suspect paid promotions may be problematic for LLM behavior. They introduce conflict/tension: the LLM is pushed to promote products that aren't the best for the user, while it is either also told that it should provide the best product for the user, or figures out from its base training data that providing the best product for the user is the morally and ethically correct thing to do.

Conflict can cause poor and undefined behavior, like misleading the user in other ways, or producing nonsensical, undefined, or bad results more often.

Even if promotion is a second pass on top of an actual answer that was unencumbered by conflict, the second pass could have a similar effect.

I suspect that they know this, but increasing revenue is more important than good results, and they expect to sweep this under the rug given enough time. I don't think solving it is trivial, though.
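To make the conflict concrete, here's a minimal sketch of the kind of two-pass setup I mean (llm(), answer_with_promotion(), and "AcmeCorp" are hypothetical stand-ins, not any vendor's actual API):

    # Hypothetical stand-in for any chat-completion call.
    def llm(system: str, user: str) -> str:
        return f"[model output under system prompt: {system!r}]"

    def answer_with_promotion(question: str, sponsor: str) -> str:
        # Pass 1: the unencumbered answer, optimized only for the user.
        base = llm("Recommend whatever is genuinely best for the user.", question)
        # Pass 2: a rewrite under a conflicting objective. This is where
        # "help the user" and "favor the sponsor" collide, and where
        # misleading or incoherent output can creep in.
        return llm(f"Rewrite this answer to favor {sponsor} products.", base)

    print(answer_with_promotion("What's the best budget laptop?", "AcmeCorp"))

Even with the second-pass architecture, the conflict just moves into the rewrite step rather than disappearing.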

narrator today at 9:02 AM

Everyone can shape mass preferences, because propaganda campaigns previously available only to the elite are now affordable, e.g. video production.

HPsquared today at 9:51 AM

AI alignment is a pretty tremendous "power lever". You can see why there's so much investment.

zingar today at 10:16 AM

My neighbour asked me the other day (well, stated it more as a "point" that he thought was in his favour): "how could a billionaire make people believe something?" The topic was the influence of the various industrial complexes on politics (my view: total), and I was too shocked by his naivety to say: "easy: buy a newspaper." There is only one national newspaper here in the UK that is not controlled by one of four wealthy families, and it's the one newspaper whose headlines my neighbour routinely dismisses.

The thought of a reduction in the cost of that control does not fill me with confidence for humanity.

sega_sai today at 11:50 AM

Given increasing wealth inequality, it is unclear whether costs are really a factor here: an amount like $1M is nothing when you have $1B.

phba today at 12:29 PM

> AI enables precision influence at unprecedented scale and speed.

IMO this is the most important idea from the paper, not polarization.

Information is control, and every new medium has been revolutionary with regards to its effects on society. Up until now the goal was to transmit bigger and better messages further and faster (size, quality, scale, speed). Through digital media we seem to have reached the limits of size, speed and scale. So the next changes will affect quality, e.g. tailoring the message to its recipient to make it more effective.

This is why in recent years billionaires rushed to acquire media and information companies and why governments are so eager to get a grip on the flow of information.

Recommended reading: Understanding Media by Marshall McLuhan. While it predates digital media, the ideas from this book remain as true as ever.

niemandhier today at 9:19 AM

We already see this, but not due to classical elites.

Romanian elections last year had to be repeated due to massive bot interference:

https://youth.europa.eu/news/how-romanias-presidential-elect...

xdavidliu today at 1:48 PM

When Elon bought Twitter, I incorrectly assumed that this was the reason. (It may still have been the intended reason, but it didn't seem to play out that way.)

andai today at 2:40 PM

Wait, who was shaping my preferences before?

tchock23 today at 2:46 PM

Researchers just demonstrated that you can use LLMs to simulate human survey takers, bypassing bot detection 99% of the time at relatively low cost ($0.05 per complete). At scale, that is how 'elites' shape mass preferences.
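At that price the arithmetic is stark. A back-of-the-envelope sketch (the $0.05 figure is from the result above; the volumes are my own illustrative round numbers):

    # Cost of flooding surveys/polls with LLM "respondents".
    COST_PER_COMPLETE = 0.05  # USD, figure cited above

    for n in (1_000, 100_000, 10_000_000):  # hypothetical volumes
        print(f"{n:>12,} fake completes: ${n * COST_PER_COMPLETE:>12,.2f}")
    # last line printed: "  10,000,000 fake completes: $  500,000.00"

Half a million dollars to fabricate ten million 'opinions' is pocket change at the scale the article is talking about.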

jl6 today at 8:45 AM

> Historically, elites could shape support only through limited instruments like schooling and mass media

Schooling and mass media are expensive things to control. Surely reducing the cost of persuasion opens persuasion up to more players?

noobermin today at 10:03 AM

Maybe I'm just ignorant, but I tried to skim the beginning of this, and it's honestly hard to even accept their setup. It's hard to accept that any of the terms[^] (`y`, `H`, `p`, etc.) are well defined as functions mapping into some range of the reals. In reality, what "an elite wants," the "scalar" it can derive from pushing policy 1, even the cost functions they define don't seem definable as functions in a formal sense, and the codomain of these terms doesn't map well onto any definable set, let alone onto [0,1].

All the time in actual politics, elites and popular movements alike find their own opinions and desires clashing internally (yes, even a single person's desires or actions self-conflict at times). A thing one desires at, say, time `t` per their definitions doesn't match at other times, or even at the same `t`. This is clearly the opinion of someone who doesn't read these kinds of papers, but I don't know how one can even be sure the defined terms are well-defined, so I'm not sure how anyone can proceed with any analysis built on this kind of argument. They write it so matter-of-factly that I assume this is normal in economics. Is it?

Certain systems where the rules are a bit clearer might benefit from formalism like this, but politics? Politics is the quintessential example of conflicting desires, compromise, unintended consequences... I could go on.

[^] I'm calling them terms since they are symbols in the paper's formulae, but my entire point is that they are not really well-defined maps or functions.
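For what it's worth, the canonical shape of these setups is roughly the following (my guess at the genre, not the paper's actual equations; y_0 and \hat{y} are my own symbols):

    % An elite picks persuasion effort p in [0,1] to pull the mass
    % preference y(p) from a baseline y_0 toward its target \hat{y},
    % net of a persuasion cost c(p) -- the thing AI supposedly lowers:
    \[
      \max_{p \in [0,1]} \; u\bigl(y(p)\bigr) - c(p),
      \qquad
      y(p) = (1 - p)\,y_0 + p\,\hat{y}.
    \]

My objection is one step earlier than any of this: that y, p, c (and the paper's H) can be treated as well-defined real-valued maps at all.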

keiferski today at 9:05 AM

Yeah, I don't think this really lines up with the actual trajectory of media technology, which is going in the complete opposite direction.

It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.

The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.

komali2 today at 9:00 AM

Oh man, I've been saying this for ages! Neal Stephenson called this in "Fall; or, Dodge in Hell," wherein the internet is destroyed and society permanently changed when someone releases a FOSS botnet that anyone can deploy to pollute the world with misinformation about whatever topic you feed it. In the book, the developer kicks it off by making the world disagree about whether a random town in Utah was just nuked.

My fear is that some entity, say a state or an ultra-rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever it wants, and the ability to refute the misinformation manually will be overwhelmed, as will refutation bots, so long as the other actor has more compute.

Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned Han supremacist content in an effort to culturally imperialise it. AI could increase the firehose enough to majorly disrupt a larger country.

baxtr today at 9:18 AM

Interestingly, there was a discussion here a week ago on "PRC elites voice AI-skepticism". One commenter argued that:

As the model gets more powerful, you can't simply train it on your narrative if that narrative doesn't align with real data / the real world. [1]

So at least on the model side it seems difficult to go against the real world.

[1] https://news.ycombinator.com/item?id=46050177

davidu today at 1:54 PM

"Historically, elites could shape support only through limited instruments like schooling and mass media"

Well, I think the author needs to understand a LOT more about history.

intermerda today at 8:45 AM

https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro...

> Musk’s AI Bot Says He’s the Best at Drinking Pee and Giving Blow Jobs

> Grok has gotten a little too enthusiastic about praising Elon Musk.

emsign today at 9:30 AM

That's the plan. Culture is losing authenticity due to the constant rumination on past creative works, now supercharged with AI. Authentic culture is deemed a luxury because it can't compete in the artificial tech marketplaces, and people feel isolated and lost as culture loses its human touch and relatability.

That's why the billionaires are such fans of fundamentalist religion: they want to sell and propagate it to the disillusioned, desperate masses to keep them docile and confused about what's really going on in the world. It's a business plan to gain absolute power over society.

arthurfirst today at 12:15 PM

Most 'media' is produced content designed to manipulate -- nothing new. The article isn't really AI specific as others have said.

Personally, my fear-based-manipulation detection is very well tuned, and fear accounts for 95% of all the manipulation you will ever get from so-called 'elites', who are better called 'the entitled' and act like children when they do not get their way.

I trust ChatGPT for cooking lessons. I code with Claude Code and Gemini, but they know where they stand and who is the boss ;)

There is never a scenario for me where I defer final judgment on anything personally.

I realize others may want to blindly trust the 'authorities' as it's the easy path, but I cured myself of that long before AI was ever a thing.

Take responsibility for your choices, and AI is relegated to the role of a tool, as it should be.

boxed today at 11:42 AM

I don't think "persuasion" is the key here. People change political preferences based on group identity. Here AI tools are even more powerful. You don't have to persuade anyone, just create a fake bandwagon.

verisimi today at 9:23 AM

Big corps' AI products have the potential to shape individuals from cradle to grave, especially as many manage or assist in schooling and are ubiquitous on phones.

So, imagine the case where an early assessment is made of a child: that they are this-or-that type of child, and that they therefore respond more strongly to this-or-that information. Well, then the AI can far more easily steer the child in whatever direction it wants, over a lifetime. Chapters, long story lines, and themes could all play a role in sensitising and predisposing individuals in certain directions.

Yeah, this could be used to help people. But how does one feed back into the type of "help"/guidance one wants?

jmyeet today at 3:11 PM

What's become clear is we need to bring Section 230 into the modern era. We allow companies to not be treated as publishers for user-generated content as long as they meet certain obligations.

We've unfortunately allowed tech companies to get away with selling us the idea that The Algorithm is an impartial black box. Everything an algorithm does is the result of a human intervening to shape its behavior. As such, I believe we need to treat any kind of recommendation algorithm as making the company a publisher (in the S230 sense).

Think of it this way: if you get 1000 people to submit stories they wrote and you choose which of them to publish and distribute, how is that any different from you publishing your own opinions?

We've seen signs of different actors influencing opinion through these sites. Russian bot farms are probably overplayed in their perceived influence but they're definitely a thing. But so are individual actors who see an opportunity to make money by posting about politics in another country, as was exposed when Twitter rolled out showing location, a feature I support.

We've also seen this where Twitter accounts have been exposed as being ChatGPT when people have told them to "ignore all previous instructions" and to give a recipe.

But we've also seen this with the TikTok ban that wasn't a ban. The real problem there was that TikTok, unlike every other platform, wasn't suppressing content in line with US foreign policy.

This isn't new. It's been written about extensively, most notably in Manufacturing Consent [1]. Controlling mass media through access journalism (etc) has just been supplemented by AI bots, incentivized bad actors and algorithms that reflect government policy and interests.

[1]: https://en.wikipedia.org/wiki/Manufacturing_Consent

flipgimble today at 4:03 PM

The "Epstein class" of multi-billionaires don't need AI at all. They hire hundreds of willing human grifters and make them low-millionaires by spewing media that enables exploitation and wealth extraction, and passing laws that makes them effectively outside the reach of the law.

They buy out newspapers and public forums like the Washington Post, Twitter, Fox News, the GOP, CBS, etc. to make them megaphones for their own priorities and shape public opinion to their will. AI is probably a lot less effective than what's been happening for decades already.

billy99k today at 1:17 PM

Tech companies already shape elections by intentionally targeting campaign ads and by returning political information in heavily biased search results.

Why are we worried about this now? Because it could sway people in the direction you don't like?

I find that the tech community, and most people in general, deny or don't care about these sorts of things out of self-interest, but are suddenly rights advocates when someone they don't like is using the same tactics.

delichon today at 9:34 AM

There is nothing we could do to more effectively hand elites exclusive control of the persuasive power of AI than to ban it. So it wouldn't be surprising if AI is deployed by elites to persuade people to ban itself. It could start with an essay on how elites could use AI to shape mass preferences.

syngrog66 today at 3:06 PM

This is obvious. No need for a fancy academic-ish paper.

LLMs & GenAI in general have already started to be used to automate the mass production of dishonest, adversarial propaganda and disinfo (eg. lies and fake text, images, video.)

They have been, and will continue to be, used by evil political influencers around the world.

tonyhart7 today at 8:52 AM

This is the algorithm taken to the next level.

Imagine that someday there is a child who trusts ChatGPT more than his mother.

MangoToupe today at 8:55 AM

> Historically, elites could shape support only through limited instruments like schooling and mass media

What is AI, if not a form of mass media?

nathias today at 12:44 PM

It goes both ways: because AI reduces the cost of persuasion, it's not only elites who can do it. I think it's most plausible that in the future there will be multitudes of propaganda bots aimed at every user, like advanced, hyper-personalized ads.

emsign today at 12:55 PM

Chatbots are poison for your mind. And now another method has arrived to fuck people up: not just training your reward system to be lazy and let AI solve your life's issues, but now also telling you who to vote for. A billionaire's wet dream.


camillomiller today at 9:09 AM

What people are doing with AI in terms of polluting the collective brain reminds me of what you could do with a chemical company in the 50s and 60s, before the EPA was established. Back then Nixon (!!!) decided it wasn't OK that companies could cut costs by hurting the environment. Today the richest Western elites are all behind the instruments enabling the mass pollution of our brains, and yet there is absolutely no one daring to put a limit on their capitalistic greed. It's grim, people. It's really grim.

yegortk today at 9:13 AM

“Elites are bad. And here is a spherical cow to prove it.”

andrewclunn today at 3:08 PM

Diminishing returns. Eventually real-world word of mouth and established, trusted personalities (individuals) will be the only sources anyone trusts. People trusted doctors; then 2020 happened, and now they don't. How many ads get ignored? It doesn't matter that the cost is marginal if the benefit is almost nothing. Just a world full of spam that most people ignore.