All of those are false equivalences. Let me give you a few better analogies.
Selling an axe so defective that it's known to break on use and impale anybody nearby. Even worse, it's marketed as great for axe murders.
Or a big tech company like Microsoft selling software for planning a mass murder, complete with indoctrination material and checklists of things to be done.
Or an auto company like Toyota selling a car that is known to accelerate uncontrollably at inopportune moments and advertising it as great for hit-and-run campaigns.
Now let's consider a few relevant examples.
An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.
Or an AI model sold to families as safe while it discreetly encourages the teenage son to commit suicide.
Or selling a financial trading AI that's known to make disastrous decisions at times.
Or selling a 'self-driving' car, knowing that its autopilot frequently makes fatal mistakes.
I know that I'm supposed to assume good intentions and not make accusations on HN. So let me make this rather obvious observation instead: some people here are dismal failures at making arguments that are consistent and free of logical fallacies, especially when it comes to questionable practices by Big Tech.