I really hope this stays up, despite the political angle. This situation is a perfect example of how AI hallucinations and inaccuracy could significantly impact our lives going forward. A nuanced, serious topic with lots of back and forth gets distilled down to headlines by a source of any kind; that's a terrifying reality, especially if we can't communicate to the public how these tools work (if they even care to learn). At least when humans did this, they had at some level skimmed the information on the person or topic.
One of the arguments used to justify the mass-ingestion of copyrighted content to build these models is that the resulting model is a transformative work, and thus fair use.
If this is indeed true, it seems like Google et al. must be liable for output like this by their own argument: if the work is transformative, they can't claim someone else is responsible for it.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
Google should be held liable for this. They are the ones who published and hosted it, and they should be accountable for every bit of libel they publish.
This story will probably become big enough to drown out the fake video itself, and the AI (which is presumably being fed the top n search results) will automatically describe this fake-video controversy instead...
>a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.
How about you stop forming judgments of people based on their stance on Israel/Hamas, and stop hanging around people who do, and you'll be fine. If somebody misstates your opinion, it won't matter.
You'll probably have to drop Bluesky and parts of HN (like this political discussion that you urge be left up), but that's necessary, because even seemingly legitimate opinions about Israel/Hamas are badly misinformed or cherry-picked, and an AI flipping a coin is just as good as any of them.
(If anybody would like to convince me that they are well informed on these topics, I'm all ears, but doing it here is IMHO a bad idea, so it's on you if you try.)
Has anyone independently confirmed the accuracy of his claim?
> I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
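For the curious, here's roughly how that kind of splice happens. This is a minimal sketch of naive snippet generation, not Google's actual algorithm (the function and parameters are made up for illustration): take a short window of text around each matched query term and join the windows with ellipses.

    # Minimal sketch of naive snippet generation: NOT Google's actual
    # algorithm, just an illustration of the failure mode. Grab a short
    # window of text around each query-term match and join the windows
    # with ellipses.
    def make_snippet(text, query_terms, window=20):
        fragments = []
        lower = text.lower()
        for term in query_terms:
            idx = lower.find(term.lower())
            if idx != -1:
                start = max(0, idx - window)
                end = min(len(text), idx + len(term) + window)
                fragments.append(text[start:end].strip())
        return " ... ".join(fragments)

    post = ("Joe is a really great real estate agent, but Frank "
            "is a total scumbag. Stole all my money.")
    print(make_snippet(post, ["Joe", "scumbag"]))
    # Joe is a really great r ... ut Frank is a total scumbag. Stole all my money

A real engine trims fragments more carefully, but the failure mode is the same: windows from two unrelated sentences get glued into one apparent statement.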
I've had multiple people copy and paste AI conversations and results into GitHub issues, emails, etc., and I think there's a growing number of people who blindly trust the output of any of these models... including the 'results summary' posted at the top of Google search results.
Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
"Trust, but verify" is all the more relevant today. Except even then, I would discount the trust part.