It's possible they were using LLMs (or even just traditional ML algorithms) to decide whether a given webpage was fraud/phishing rather than mere trademark infringement, though. In that case it makes sense to be angry that no sapient being first checked whether the report was accurate before sending it off.
More than the hypothetical risk of Earth being consumed by a paperclip maximizer, I believe the real and present danger of ML and AI technology lies in humans making irresponsible decisions about where and how to apply it.
For example, in my country we are still dealing with the fallout from a decision the Tax Department made over a decade ago: it used a poorly designed ML algorithm to screen social-benefits applicants for fraud. The result was several public inquiries, and it even contributed to the collapse of a government coalition. Tens of thousands of people are still suffering from being wrongly labeled as fraudsters, facing hefty fines and being forced to repay benefits deemed fraudulent.