People put a lot of weight on blame-free post-mortems and not punishing people who make "mistakes", but I believe that has to stop at the level of malice. Falsifying quotes is malice. Fire the malicious party or everything else you say is worthless.
Yes. This is being treated as though it were a mistake, and oh, humans make mistakes! But it was no mistake. Perhaps it was a mistake on the part of whoever was responsible for reviewing the article before publication that they didn't catch it. But plagiarism and fabrication require malicious intent, and the authors responsible engaged in both.
I'm curious if you've read the author's Bluesky statement (which wasn't available when you made your comment) and what you think of it?
There’s no malice if there was no intention to falsify quotes. Using a flawed tool doesn’t count as intent.
At this point, anyone reporting on tech should know the problems with AI. So even if AI is used for research and a human writes the article from that output, there is still an absolute, unquestionable expectation of standard manual fact verification. Not doing it is pure malice.
I don’t see how you could know that without more information. Using an AI tool doesn’t imply that they thought it would make up quotes. It might just be careless.
Assuming malice without investigating is itself careless.
They don't actually say it's a blame-free post-mortem, nor is it worded as such. They do say it's their policy not to publish anything AI-generated unless it's specifically labelled. So the assumption would be that someone didn't follow policy and there will be repercussions.
The problem is that people on the Internet, HN included, always howl for maximalist repercussions, i.e. someone should be fired. I don't see that as a healthy or proportionate response. I hope they just reinforce that policy and everyone keeps their jobs and learns a little.