Ars Technica getting caught publishing LLM-hallucinated quotes attributed to the author, in their coverage of this very story, is quite ironic.
Even on a forum where I saw this author's original article posted, someone used an LLM to summarize the piece without having fully read it themselves.
How many levels of outsourced thinking are occurring before this becomes a game of telephone?
Aurich Lawson (creative director at Ars) posted a comment[0] in response to a thread about what happened: the article has been pulled, and they'll follow up next week.
[0]: https://arstechnica.com/civis/threads/journalistic-standards...
Yikes. I subscribed to them last year on the strength of their reporting, at a time when it's hard to find good information.
Printing hallucinated quotes is a huge blow to their credibility, AI or not. And their credibility was only just recovering after one of their long-time contributors, a complete troll of a person who was a poison on their forums, went to prison for either pedophilia or soliciting sex from a minor.
Some seriously poor character judgment is going on over there. With all their fantastic reporters, I hope the editors explain this carefully.
The effort required to click through an LLM's sources is, what, 20 seconds? Was a human in the loop for sourcing that article at all?
Incredible. When Ars pulls an article and its comments, they wipe the public XenForo forum thread too, but Scott's post there was archived. Username scottshambaugh:
https://web.archive.org/web/20260213211721/https://arstechni...
>Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic.
Instead of cross-checking the fake quotes against the source material, some proud Ars Subscriptors proceeded to defend Condé Nast by accusing Scott of being a bot and/or a fake account.
EDIT: Page 2 of the forum thread is archived too. This poster spoke too soon:
>Obviously this is a massive breach of trust if true, and I will likely end my pro sub if this isn't handled well. But to the credit of Ars, having this comment section at all is what allows something like this to surface. So kudos on keeping this chat around.
> How many levels of outsourced thinking are occurring before this becomes a game of telephone
How do you know quantum physics is real? Or radio waves? Or even basic health advice? We don't. We outsource our thinking about these things to someone we trust, because tracing everything back to its root source would leave us paralyzed.
Most people seem to have never thought about the nature of truth and reality, and AI is giving them a wake-up call. Not to worry though. In 10 years everyone will take all this for granted, the way they take all the rest of the insanity of reality for granted.
Has it been shown or admitted that the quotes were hallucinations, or is the presumption now that all made-up content is a hallucination?
More than ironic, it's truly outrageous, especially given the site's recent propensity for negativity towards AI. They've been caught red-handed doing the very things they routinely criticize others for.
The right thing to do would be a mea-culpa-style post explaining what went wrong, but I suspect the article will simply remain taken down and Ars will pretend this never happened.
I loved Ars in the early years, but I'd argue that since the Condé Nast acquisition in 2008 the site has been a shadow of its former self, trading on a formerly trusted brand name that its recent iterations simply don't live up to.
Honestly frustrating that Scott chose not to name and shame the authors. Liability is the only thing that's going to stop this kind of ugly shit.
Ars Technica has always been trash, even before LLMs, and is mostly an advertisement hub for the highest bidder.
Also ironic: the same professionals who advocate "don't look at the code anymore" and "it's just the next level of abstraction" responding with outrage when a journalist hands them an unchecked article.
Read through the comments here, mentally replace "journalist" with "developer", and wonder about the standards and expectations in play.
Food for thought on whether the users who rely on our software might feel similarly.
There are many places to take this line of thinking. One argument would be "well, we pay journalists precisely because we expect them to check", or "in engineering we have test suites and can test deterministically", but I'm not sure any of them hold up. "The market pays for the checking" may come to apply to developers reviewing AI code as well, and those test suites increasingly get vibed and only checked empirically, too.
Super interesting to compare.