Regardless, "AI" is not doing the killing in that case. Rather, humans have deployed it to control weapons that kill people. There are several layers of indirection there before you can claim "AI kills people". It's the same indirection as when a human chooses to press a button that fires a missile, or to stab someone, just with more steps involved.
So you can also be outraged at weapon manufacturers, which is one step closer. Or you can skip the indirection entirely and be outraged specifically at the people in charge of using this technology, which is my point.
I'm disgusted by this industry as much as you are, believe me. But blaming the companies that produce "AI" for people dying is misplaced. They're certainly part of the problem, but not the root cause.
> AI needs to be opposed
AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
But even if it did, it's silly to claim that any technology needs to be opposed. This one is potentially more problematic than others because it raises some difficult existential and social questions which we might not be ready to answer, but it's still ultimately on us to control how it's used. We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button, so a probabilistic pattern generator seems trivial in comparison. It's going to be bumpy, but I think we'll manage.
> AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
They've claimed the term; this is not a useful objection to make at this point. And everyone was fine with calling our shitty little computer vision handwriting parsers "AI algorithms" before LLMs.
> We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button
Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on? "Thanks guys, our democracies are so stable these will literally never be used for a nuclear holocaust, and they might have useful mining applications!"
Can you not think of any exceptionally nasty things the US government could do with "machines that act as if they can think for most practical purposes"? Do you think maybe it might be a good idea to develop that technology only after you have made sure that the government serves the people's interests?