This is not using AI to “assist in writing your articles”. This is using AI to report your articles, and then passing it off as your own research and analysis.
This is straight up plagiarism, and if the allegations are true, the reporters deserve what they would get if it were traditional plagiarism: immediate firings.
Yeah, I have been extremely pro-AI for decades, and I use LLMs daily, but this is not an acceptable use of an LLM. Especially since it's fabricating quotes, so there's the plagiarism issue and then the veracity issue. And it's doing this to report on an incident of someone being bizarrely accosted by LLMs. Just such a ridiculous situation all around.
Absolutely inevitable if you condone using GAI to 'assist' in writing. The outcome is reporters just writing prompts and giving the output a quick once-over, then skipping even that last step because they believe the companies selling generative AI and/or are under time pressure and it seems good enough.
They are word generators. That is their function, so if you use them, words will be generated that are not yours and which are sometimes nonsense and made up.
The problem here was not plagiarism but generated falsehoods.
I thought it was very obvious that AI is doing almost all the writing at most of the news outlets these days. Especially the ones that only ever had an online presence.
Not just the reporter, but anyone who had eyes on it before it was published. And whoever is responsible for setting the culture that allowed this to happen.
> This is straight up plagiarism
More likely libel.
> the reporters deserve what they would get if it were traditional plagiarism: immediate firings.
I don't give a fuck who gets fired when I have been publicly defamed. I care about being compensated for the damages caused to me. If a tow truck backed into my house, I would be much less concerned with the internal workings of some random tow truck company than with ensuring my house was repaired.