I really wish BrandShield didn't use "AI" as a marketing term. It looks like it's just doing a generic ctrl-F on webpages?
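For what I mean by "generic ctrl-F": something in the spirit of the sketch below would reproduce the behavior people describe. This is pure speculation on my part, not BrandShield's actual code; the brand terms and function name are made up.

```python
import urllib.request

# Made-up brand terms; a real monitoring tool would load these from a client list.
BRAND_TERMS = ["examplebrand", "example brand"]

def flag_page(url: str) -> bool:
    """Fetch a page and flag it if any brand term appears anywhere in the HTML.

    No context, no intent detection: just a substring scan, i.e. ctrl-F.
    """
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace").lower()
    return any(term in html for term in BRAND_TERMS)
```

A scan like that can't distinguish a phishing page from a fan page or a news article, which is the whole problem.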
Then things like this happen, and people think "ooh AI is bad, the bubble must burst" when this has nothing to do with that in the first place: the real issue was that they sent a "fraud/phishing" report rather than a "trademark infringement" report.
Then I also wish that people who know better, who realize this really has nothing to do with AI (it's obviously not making decisions autonomously any more than a regular program does), would stop blindly parroting and blaming it as a way to get more clicks, support, and rage.
> and people think "ooh AI is bad, the bubble must burst" when this has nothing to do with that in the first place
That haphazard branding and parroting is exactly why the bubble needs to burst. Bubbles bursting take out the grifters and rarely kill off all the innovation in the scene (they kill a lot of it, though; I'm not trying to dismiss that).
It's possible they were using LLMs (or even just traditional ML algorithms) to decide whether a given webpage was fraud/phishing rather than mere trademark infringement, though. In that case it makes sense to be angry that no sapient being checked whether the report was accurate before it was sent off.
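To make that concrete, here's a minimal sketch of the kind of classifier I'm imagining, using scikit-learn; the training examples and labels are toy data I made up, and nothing here is claimed about what BrandShield actually runs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples; a real system would need far more (and far better) data.
pages = [
    "enter your password to verify your account now",       # phishing-style wording
    "cheap replica watches with the official brand logo",   # infringement-style wording
]
labels = ["fraud/phishing", "trademark infringement"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(pages, labels)

# An ambiguous page still gets a hard label one way or the other, which is
# exactly why a human should review the output before a report is filed.
print(clf.predict(["official watch store, log in to your account"]))
```

A model like this always returns one of the two labels, however uncertain it is, so piping its output straight into an abuse report with no human check is a process failure, not an "AI" failure.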
When AI is being used as a cover for the bad/questionable behavior the company was already doing, then there is no bubble to burst. The performance of the "AI" doesn't matter, only that it throws up a smokescreen in front of the company when people call to complain about the abuse.
I fear that ship has already sailed. I think the grifters and scammers have already abused the term enough that even decent uses of it are now tainted. I know that the two aren't strictly the same, but I would suggest using "Machine Learning" instead, which I think has more respectable connotations.
I mean, whether this has anything to do with AI or not (I'd buy that they're using LLMs to write abuse letters or similar), it fits very nicely into the general pattern of AI breaking the internet through an endless deluge of worthless, misleading spam. So, perhaps call it honorary AI?
I find that businesses that bill themselves as ${TOOL}-users instead of ${PROBLEM}-solvers are, as a general rule, problematic. I couldn't possibly care any less whether a product is built on AI or a clever switch statement or a bazillion little gnomes doing the work by hand. I care that it solves a problem.
"AI" does need to die, not so much because LLMs are bad, but because, like "big data" and "blockchain" and many other buzzwordy tools before it, it is a solution looking for a problem.