Hacker News

I won a championship that doesn't exist

80 points · by SEJeff · yesterday at 8:38 PM · 57 comments

Comments

simonw · yesterday at 9:07 PM

You don't need to vandalize Wikipedia to get this kind of thing to work.

Back in September 2024 I named a whale "Teresa T" with just a blog entry and a YouTube video caption: https://simonwillison.net/2024/Sep/8/teresa-t-whale-pillar-p...

(For a few glorious weeks if you asked any search-enabled LLM, including Google search previews, for the name of the whale in the Half Moon Bay harbor it confidently replied Teresa T)

nicole_express · yesterday at 10:36 PM

It's an odd thing here, because I don't really understand why this is LLM-specific at all. If someone came up to me and asked "who's the 6 Nimmt world champion?" I'd google it and probably find the same result, and have no reason not to believe it. I mean, for all I know the game is being made up too, though it has more sources at least.

xeeeeeeeeeeenu · yesterday at 10:00 PM

The key to successful poisoning attacks is to introduce brand new information that doesn't directly contradict other training data. It's much easier to convince the LLMs that you're the king of a fictional Mapupu kingdom than the president of the United States.

So this means that for bad actors it's more efficient to manufacture brand new fake stories instead of trying to distort the real ones. Don't produce fake articles absolving yourself of a crime, instead produce fake articles accusing your opponent of 100 different things. Then people will fact-check the accusations using LLMs, and since all the sources mentioning those accusations are controlled by you, the LLMs will confirm them.

_carbyau_ · yesterday at 11:30 PM

This is one of the problems with labelling automation as AI.

People think that whatever information an "AI" spits out has gone through a round of critical thinking which enhances the trust value of that information.

The early LLMs, trained on groomed data, may have had such critical thinking somewhere in the pipeline. Even then the output was not really trustworthy.

And now? Using agents to search the internet for you?...

Garbage in, garbage out still applies in computing as ever.

blobbers · yesterday at 9:53 PM

This is basically the same problem as products astroturfing Reddit, or SEO-gaming Google. You want a new X, and so they go hard after the keywords associated with it.

This is sort of why "brand" matters; it provides a source of trust.

Encyclopedia Britannica used to be that source of 'facts'. Then it became whatever PageRank told you. Eventually SEO ruined that.

News stories are the same thing. For certain groups, they have their 'independent' publication whose reporting they trust.

billypilgrim · yesterday at 9:52 PM

I must say I expected an actual poisoning of the data used to train the LLM and was excited, but the examples indicate that the LLM just searched the web and reported what it found? When you create a website with fake information and search Google for that information, it will of course bring up your site, not because it’s factually correct but because it’s related to what you searched for. What am I missing?

amarant · yesterday at 9:07 PM

"Stoner became the first American world champion...."

Even being on stoner.com, I read that as meaning something different from what was meant.

The OP has a great surname!

jrmg · yesterday at 9:41 PM

BBC journalist doing a very similar thing in February: https://www.bbc.com/future/article/20260218-i-hacked/-chatgp...

Paracompact · yesterday at 9:23 PM

Most of the popular discourse around AI is still at the level of, "Don't trust the AI, trust the sources!" When it gets to the point where even the sources of simple facts are untrustworthy, the average person just trying to learn some trivia about the world is doomed.

Doesn't help that AI media literacy is so primitive compared to how intelligent the models are generally. We're in a marginally better place than we were back when chatbots didn't cite anything at all, but duplicated Wikipedia citations back to a single source about a supposedly global event is just embarrassing. By default, I feel citations and epistemological qualifications should be explicit, front-and-center, and subject to introspection, not implicit and confined to tiny little opaque buttons as an afterthought.
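The "duplicated citations back to a single source" pattern can be made concrete with a small provenance check. This is a minimal sketch (the function name and the diversity ratio are my own illustration, not anything current chat interfaces actually expose): it simply measures how many distinct domains stand behind a list of citation URLs, so that four footnotes all resolving to one site score low.

```python
from urllib.parse import urlparse
from collections import Counter

def source_diversity(citation_urls):
    """Ratio of distinct source domains to total citations.

    A low ratio suggests the 'many citations, one source'
    pattern: superficially independent footnotes that all
    trace back to the same site.
    """
    domains = [
        urlparse(u).netloc.lower().removeprefix("www.")
        for u in citation_urls
    ]
    counts = Counter(domains)
    return len(counts) / len(domains) if domains else 0.0

# Four citations, but only two underlying domains -> ratio 0.5
urls = [
    "https://en.wikipedia.org/wiki/Example",
    "https://en.wikipedia.org/wiki/Example_2",
    "https://en.wikipedia.org/wiki/Example_3",
    "https://example-news.com/story",
]
print(source_diversity(urls))  # prints 0.5
```

A real implementation would also need to follow mirrors and syndicated copies back to their origin, which is exactly the part that is hard; this only catches the laziest case where the duplicate citations share a domain.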

duxup · yesterday at 10:56 PM

In American college football there's all sorts of awards, and each year they put out "watch-lists" and silly press releases that get parroted on social media by any team that has their own player mentioned.

I've wanted to come up with my own for a while ...

yen223 · yesterday at 11:31 PM

I feel uncomfortable that I can't actually verify that this story is true.

Asking Opus 4.7 who the reigning 6nimmt! champion is leads to this article and a warning about a possible hoax.

Lerc · yesterday at 10:47 PM

How many people have done things like this and then disclosed the fact? It would be fascinating to collect as many instances as you can to develop a data set. Could you train a system to find more? How many could it find, and in what areas?

drchiu · yesterday at 9:39 PM

My wife cited ChatGPT as her primary source the other day when she wanted to debate with me on something.

"AI told me that..."

In the old days, it would have been "I read on Google..."

gverrilla · yesterday at 10:55 PM

Poisoning Wikipedia shows low respect.

CrzyLngPwd · yesterday at 9:16 PM

So it's trivial for an individual to poison the LLMs, but imagine what a state with billions of American dollars could achieve.

We can easily look ahead a few years and see how people will rely on the LLMs to be a source of truth in the same way people looked at Google that way, or newspapers.

Rewriting history has been happening for a while, and with LLMs being the one-stop shop for guidance and truth, the rewrite will be complete.

Doubly so since most people see these things as artificial intelligence, and soon to be superintelligence...so how can they be wrong?

standeven · yesterday at 9:09 PM

I've had LLMs regurgitate satire as fact many, many times.

Havoc · yesterday at 9:41 PM

Like a FIFA peace prize?

poglet · yesterday at 11:08 PM

I made a post on Reddit asking for help with a TV; I had made up some (likely incorrect) technical assumptions about the issue. Several years later I asked an LLM about the TV, and it used my own post as a citation to tell me what was wrong with it.

I am paranoid that this is happening every time I ask an LLM for a product recommendation or a shop recommendation. In the same way as with SEO, anyone wanting to sell or convince needs to do as much as they can to influence the LLM.

shevy-java · yesterday at 9:29 PM

So, like Frank Dux! The epilogue of the movie Bloodsport presents his claims as fact, but he didn't actually do those things.

It's almost like he was a better Chuck Norris than Chuck Norris. By his own ... testimony ...

nonameiguess · yesterday at 9:27 PM

Pales in comparison to what Frank Dux and Frank Abagnale were able to convince much of the world they did with no evidence other than their own stories. Who knows how much of recorded and believed history is complete bullshit? Not to get too far into sacred territory, but claims around Siddhartha Gautama, Jesus Christ, and the Prophet Muhammad are quite a bit less plausible than the legends of Ragnar Lodbrok or the tales of Jonathan Swift, but nonetheless widely believed.


dyauspitr · yesterday at 9:10 PM

Why does this person deserve any kind of support? What’s the point of poisoning LLMs? To put some cursory Luddite roadblock that might delay the technology for a couple of months?
