Hacker News

coldpie today at 1:15 PM

I would like to give a small defense of Benj Edwards. While his coverage on Ars definitely has a positive spin on AI, his comments on social media are much less fawning. Ars is a tech-forward publication, and it is owned by a major corporation. Major corporations have declared LLMs to be the best thing since breathable air, and anyone who pushes back on this view is explicitly threatened with economic destitution via the euphemism "left behind." There aren't a lot of paying journalism jobs out there, and people gotta eat; hence, perhaps, a more positive spin on AI from this author than is justified.

All that said, this article may get me to cancel the Ars subscription that I started in 2010. I've always thought Ars was one of the better tech news publications out there, often publishing critical & informative pieces. They make mistakes; no one is perfect. But this article goes beyond bad journalism into actively creating new misinformation and publishing it as fact on a major website. This is actively harmful behavior and I will not pay for it.

Taking it down is the absolute bare minimum, but if they want me to continue to support them, they need to publish a full explanation of what happened. Who used the tool to generate the false quotes? Was it Benj, Kyle, or some unnamed editor? Why didn't that person verify the information coming out of the tool that is famous for generating false information? How are they going to verify information coming out of the tool in the future? Which previous articles used the tool, and what is their plan to retroactively verify those articles?

I don't really expect them to have any accountability here. Admitting AI is imperfect would result in being "left behind," after all. So I'll probably be canceling my subscription at my next renewal. But maybe they'll surprise me and own up to their responsibility here.

This is also a perfect demonstration of how these AI tools are not ready for prime time, despite what the boosters say. Think about how hard it is for developers to get good quality code out of these things, and we have objective ways to measure correctness. Now imagine how low the quality of the journalism we get from these tools will be. In journalism, correctness is much less black-and-white and much harder to verify. LLMs are a wildly inappropriate tool for journalists to be using.


Replies

the8472 today at 3:24 PM

Looks like they're gonna investigate and perhaps post something next week. https://arstechnica.com/civis/threads/journalistic-standards...

phyzome today at 2:39 PM

I believe you can go ahead and cancel your subscription now and it will only take effect at the next renewal point.

That helps ensure you don't forget, and sends the signal more immediately.

actinium226 today at 2:05 PM

Kind of funny that the people trusting AI too much appear to be the ones who will be left behind.