That is a huge danger. Legally speaking it's not an issue, since misusing a tool doesn't relieve liability (in most circumstances - all the trivial ones, at least)... but it is a more significant political issue, as evidenced by the Anthropic vs. DoD interactions, since the DoD's actions are largely immune to oversight by the justice system.
Of course, that depends on sane, non-politicized courts, which you may rightfully doubt exist right now - but assuming the system works anywhere near as designed, outsourcing a decision to an AI wouldn't change liability.
For DC fans: Harvey Dent would similarly not be free from liability for actions taken after a coin flip, even if that coin could be viewed, in a certain light, as having the power to force or prevent certain actions. An AI box that tells Harvey whether to shoot or spare would be similarly irrelevant to his liability - and a scenario in which Harvey points the gun at someone and then walks away, giving the AI control over the trigger, is essentially no different. In all cases Harvey is responsible for constructing the scenario that (potentially) leads to someone's death and, moreover, even if the gun wasn't fired because the AI decided to spare the person, Harvey would be on the hook for attempted murder.