In fairness, a well-designed and well-tested weapon can at least be expected to perform reliably and consistently each time. We also understand deeply how such weapons work, and if something goes wrong we can readily determine whether it was user error, a defect, or a design flaw. LLMs, not so much.