Searching for "benn jordan isreal", the first result for me is a video[0] from a different creator, with the exact same title and date. There is no mention of "benn" in the video, but there are a few mentions of Jordan (the country). So maybe that was enough for Google to hallucinate a connection. Highly concerning!
Most people Google things they're unfamiliar with, and whatever the AI Overview generates will seem reasonable to someone who doesn't know better. But those overviews are wrong a lot.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview in its entirety and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
From the AI hallucination:
> Video and trip to Israel: On August 18, 2025, Benn Jordan uploaded a YouTube video titled "I Was Wrong About Israel: What I Learned on the Ground", which detailed his recent trip to Israel.
This sounds like the recent Ryan McBeth video https://youtu.be/qgUzVZiint0?si=D-gJ_Jc9gDTHT6f4. I believe the title is the same. Scary how it just misattributed the video.
Reading this I assumed it was down to the AI confusing two different Benn Jordans, but nope, the guy who actually published that video is called Ryan McBeth. How does that even happen?
It's not Google's fault. The 6pt text at the bottom clearly says:
"AI responses may include mistakes. Learn more"
I've shared this example in another thread, but it fits here too. A few weeks ago, I talked to a small business owner who found out that Google's AI is telling users his company is a scam, based on totally unrelated pages that mention a different, similarly named brand.
We actually win customers whose primary goal is getting AI to stop badmouthing them.
The law needs to stand up and make an example here; otherwise this will just continue, and at some point a real disaster will occur due to AI.
I adore Benn Jordan, a refreshing voice in music and tech. I hope Google offers him a public apology. Ultimately, this is exactly how innocent, private people will have their reputations and lives wrecked by unregulated public-facing LLM text generation.
I approach this from a technical perspective, and I have research showing that the short snippet lengths in Google's results make it unfit for generating summaries [1].
Google also has to support AI summaries for 200k to 500k queries per second. Using a model good enough to prevent hallucinations would be too expensive, so they use a bad model because it's fast and cheap (rough numbers sketched below).
Google also loses click through ad revenue when presenting a summary.
All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.
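To put very rough numbers on the cost point (back-of-envelope only: the queries-per-second range comes from above, while the per-query cost is a figure I assumed purely for illustration):

    // Back-of-envelope sketch. The QPS figure is the midpoint of the
    // 200k-500k range above; the per-query cost is an assumed
    // placeholder, not a known figure.
    const qps = 300_000;
    const secondsPerYear = 86_400 * 365;
    const costPerQuery = 0.0001; // assumed: $0.0001 per summary
    const annualCost = qps * secondsPerYear * costPerQuery;
    console.log(`~$${(annualCost / 1e9).toFixed(1)}B/year`);
    // ~$0.9B/year even at a hundredth of a cent per query, before
    // counting any lost click-through ad revenue.

Even under those generous assumptions the per-query budget is tiny, which is consistent with the fast-and-cheap-model theory.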
Why must humans be responsible in court for the biological neural networks they possess and operate but corporations should not be responsible for the software neural networks they possess and operate?
The year is 2032. One of the big tech giants has introduced Employ AI, the premier AI tool for combating fraud and helping recruiters sift through thousands of job applications. It is now used in over 70% of HR departments, for nearly all salaried positions, from senior developers to minimum wage workers.
You apply for a job, using the standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted specifically by you.
When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one who happens to share your last name. Your first name also happens to pop up somewhere in the article.
With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
You've been all but blacklisted from working. For some reason, none of your applications ever makes it past the initial screening. You can't even know the article exists; no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas; it is too big to ever care about someone like you, and it is not in the business of making exceptions. And legally speaking, it's technically not the software making the final screening decisions, and it does say its summaries are experimental and might be inaccurate, in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.
It's important to take screenshots of websites with a grain of salt, since anyone with basic web development knowledge can edit the HTML and write whatever they want. Not saying this didn't happen, though; I'm sure it did.
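For illustration, a couple of lines in the browser's devtools console are enough to rewrite what any page appears to say before the screenshot is taken (the 'h1' selector is just a hypothetical example):

    // Run in the devtools console on any page; the selector is only
    // an example -- any visible element can be rewritten this way.
    const el = document.querySelector('h1');
    if (el) el.textContent = 'Anything you want';

No tooling beyond the browser itself is required, which is why a screenshot alone is weak evidence.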
Google has been a hot mess for me lately. Yeah, the AI is awful: numerous times I've been shown information that's either inaccurate or outright false. It will summarize my emails wrong, and it will mess up easy facts like what time my dinner reservation is. Worst is the overall search UX, especially autocomplete. Suggestions are never right, and trying to tap and navigate through them always leads to a mis-click.
Everyone was dumping on Google when OpenAI first launched ChatGPT, for playing it too safe and falling behind on cool new tech. Now everyone's upset that LLMs hallucinate and says these things shouldn't launch until proven safe.
I am very curious if California's consumer rights to data deletion and correction are going to apply to the LLM model providers.
"AI Responses May Include Mistakes": https://news.ycombinator.com/item?id=44142113
IMHO the more people get trained to automatically ignore the "AI summary", just as many have been conditioned to ignore ads, the better.
About a month ago, we had this thread: https://news.ycombinator.com/item?id=44615801
Dave Barry is pretty much A-list famous.
I asked Meta Raybans about me, and they said I died last September.
You should be able to sue Google for libel for this, and disclaimers about AI accuracy in their fine print should not matter. Clearly, too many people ignore those disclaimers for them to help, which is exactly how these rumors reach critical mass and become self-sustaining.
I wish I could build the speech jammer, his coolest project. I also am an adult and understand why I can't have one.
Ryan McBeth glows so bright, his videos should only be viewed with the aid of a welding mask. His entire online presence seems to circle the theme of promoting military enlistment, tacitly when not explicitly.
Very bizarre that Benn Jordan somehow got roped into it.
Yikes. As expected, people have started to take the Google AI summary as fact without doing any more research.
We all knew this would happen, but I imagine we all hoped that anyone finding something shocking there would look further into it.
Of course, given the current state of searching and laziness (you aren't rewarded with dopamine for every informative search, versus the big dopamine hits if you just make up your mind and keep scrolling the endless feed), that hope was probably naive.
Google is not posting "snippets" or acting as a portal to web content; it's generating new content now, so I would assume it would no longer have Section 230 protections and would be open to defamation suits.
"Section 230 of the Communications Decency Act, which grants immunity to platforms for content created by third parties. This means Google is not considered the publisher of the content it indexes and displays, making it difficult to hold the company liable for defamatory statements found in search results"
There was a post on HN the other day where someone was launching an email assistant that used AI to summarise emails that you received. The idea didn't excite me, it scared me.
I really wish the tech industry would stop rushing out unreliable misinformation generators like this without regard for the risks.
Google's "AI summaries" are going to get someone killed one day. Especially with regards to sensitive topics, it's basically an autonomous agent that automates the otherwise time-consuming process of defamation.
In an ideal world, a product that can be harmful is tested privately until there is a reasonable amount of safety in using that product. With AI, it seems like that protocol has been completely discarded in favor of smoke-testing it on the public and damn the consequences.
Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …
We are so screwed.
the "AI" bullshitters need to be liable for this type of wilful defamation
and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people
and if this makes "AI" nonviable as a business? tough shit
Could this feasibly be a case of bad actors using generative AI so that, once discovered, it looks like the model simply hallucinated? Of course, by then the damage is done. Like a form of trolling.
The weaponization of "AI mistakes" - oops, don't take that seriously, everyone knows AI makes mistakes. Okay, yeah, it's a 24 pt headline with incorrect information, it's okay because it's AI.
Integrity is dead. Reliable journalism is dead.
One has to wonder if one of the main innovations driving "AI" is the complete lack of accountability and even shame.
Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product, especially an all-encompassing general one. How do you know that these probabilistic text generators are performing valid synthesis, as opposed to word salad? You don't. So LLM technology would have been used to do things like augment search/retrieval, pointing to concrete sources and excerpts. Or to analyze a problem using math, driving formal models that might miss the mark but at least wouldn't be blatantly incorrect with a convincing narrative. Some actual vision of an opinionated product that wasn't just dumping the output and calling it a day.
Twenty years ago we also wouldn't have had a company placing a new beta-quality product (at best) front and center as a replacement for its already wildly successful product. But it feels like the real knack of these probabilistic word generators is convincing "product people" of their supreme utility. Of course they're worried: they found something that can bullshit better than they can.
At any rate, all of those discussions about whether humans would be capable of keeping a superintelligent AI "boxed" are laughable in retrospect. We're propping open the doors and chumming other humans' lives as chunks of raw meat, trying to coax it out.
(Definitely starting to feel like an old man here. But I've been yelling at Cloud for years so I guess that tracks)
Not the first misattribution by an AI
https://theconversation.com/why-microsofts-copilot-ai-falsel...
Definitely not the last.
Your daily reminder that AI hallucination is a feature, not a bug.
See also "ChatGPT is bullshit"
https://link.springer.com/article/10.1007/s10676-024-09775-5
Can anyone independently confirm this guy's story?
His posts are mostly political rage bait and he actively tries to data poison AI.
He also claims that Hitler compares favorably to Trump. Given his seeming desire to let us all know how much he dislikes Israel, that's a pretty... interesting... claim.
Just because he's an unreliable source doesn't mean his story is false. But it would be nice to have confirmation before taking it seriously.
This is fine though, because if you expand the AI Overview, scroll to the end, and put on your reading glasses, there's a teeny tiny line of text that says "AI responses may include mistakes". So billion-dollar misinformation machines can say whatever they want about you.
It's fine, it's okay. It's not like these funky LLMs will be used in any critical capacity in our lives like deciding if we make it through the first step of a job application or if we're saying anything nefarious on our government monitored chats. Or approving novel pharmaceuticals, or deciding which grant proposals to accept or deciding which government workers aren't important and can be safely laid off! /s
AI makes stuff up, film at 11. It's literally a language model. It's just guessing what word follows another in a text, that's all it does. How's this different from the earlier incidents where that same Google AI would suggest that you should put glue on your pizza or eat rocks as a tasty snack?
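For anyone who hasn't internalized what "guessing what word follows another" means, here's a toy sketch (a made-up bigram table, assumed purely for illustration; real models are vastly more sophisticated, but the principle that nothing checks truth is the same):

    // Toy next-word guesser over a made-up bigram table. Nothing here
    // checks whether a continuation is *true*; it only reflects
    // co-occurrence statistics in the (imaginary) training text.
    const bigrams: Record<string, Record<string, number>> = {
      put:  { glue: 3, cheese: 40 },
      glue: { on: 5 },
      on:   { pizza: 7, toast: 2 },
    };

    function nextWord(word: string): string {
      const entries = Object.entries(bigrams[word] ?? {});
      if (entries.length === 0) return "?";
      const total = entries.reduce((sum, [, n]) => sum + n, 0);
      let r = Math.random() * total;
      for (const [w, n] of entries) {
        if ((r -= n) <= 0) return w;
      }
      return entries[entries.length - 1][0];
    }

    console.log("put", nextWord("put")); // usually "cheese", occasionally "glue"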
Can we stop conflating LLMs with the companies that created them? It's "…Gemini made up…". Do we not value accuracy? It'd be a whole different story if a human defamed you, rather than a token predictor.
GPT-4 is about 45 gigabytes. https://dumps.wikimedia.org/other/kiwix/zim/wikipedia/wikipe... , a recent dump of the English wikipedia, is over twice that, and that's just English. Plus AIs are expected to know about other languages, science, who even knows how much Reddit, etc.
There literally isn't room for them to know everything about everyone when they're just asked about random people without consulting sources, and even when consulting sources it's still pretty easy for them to come in with extremely wrong priors. The world is very large.
You have to be very careful about these "on the edge" sorts of queries, it's where the hallucination will be maximized.
I really hope this stays up, despite the degree of politics involved. This is a perfect example of how AI hallucinations and lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic, with lots of back and forth, being distilled down to headlines by any source is a terrifying reality, especially if we aren't able to communicate to the public how these tools work (if they even care to learn). At least when humans did this, they had at some level at least skimmed the information on the person/topic.