
marojejian · last Wednesday at 8:43 PM

While I'm as paranoid about LLMs as the next HN'er, there are some silver linings to this research:

1) The LLMs mostly used factual information to influence people (vs., say, emotional or social influence).
2) The facts were mostly accurate.

I'm not saying we shouldn't worry. But I expected the results to be worse.

Overall, the interesting finding here is that political opinions can be changed by new information at all. I'm curious how this effect would compare to comparably informed human discussion. I would not be surprised if the LLMs were more effective, for at least two reasons:

1) Cost-efficiency, in terms of the knowledge, effort, and skill required to provide personalized arguments.
2) Reduction in the emotional barrier to changing your mind: people don't want to "lose" by being wrong about politics to someone else, but perhaps a machine doesn't trigger this social/tribal response.

Cited papers:

https://www.nature.com/articles/s41586-025-09771-9

https://www.science.org/doi/10.1126/science.aea3884


Replies

techblueberry · last Wednesday at 8:46 PM

I'll add a third reason: I think people are, in general, very bad at making an argument to someone with a different value system. I'm liberal, I have family members who are conservative, I read conservative books, and I'm genuinely curious about new ideas. But most people I know (and I'm sure this works vice versa) are only good at expressing political opinions in the language of people who share their values. Republicans and Democrats don't just talk about different things; they talk about them in very different ways.

I find this online as well: I hate being "out of my echo chamber" because those arguments are just uniformly pointless. (This holds in all directions, by the way, for people to the right or left of me.)

Interestingly, I also find it challenging to talk to LLMs about competing values. If I ask an LLM to explain a conservative position and then make counter-arguments to that position, it will almost never tell me my counter-argument is wrong, just "you've hit the nail on the head! Boy are you smart!"

zem · last Wednesday at 11:47 PM

The scenario that worries me is "Fox News, but personalized." Fox can run a dozen pieces on "immigrants are taking your jobs," but an LLM hooked into your Google profile could generate an article on how "plumbers in Nashville are being displaced by low-paid Mexicans," specifically designed to make you personally fear for your job if the Nazi du jour isn't elected.

ekjhgkejhgk · last Thursday at 12:00 AM

> the LLMs mostly used factual information to influence people

No, you see. This is how I used to think when I was a teenager.

Democracy isn't about being factually correct. It's about putting rules in place that make it very difficult for power to accumulate to the point where it can bend the rules themselves.

It's not a silver lining that LLMs are persuasive by being mostly accurate if they're being used to further increase their owners' power.