> who do we hold to account when a model makes a mistake?
First, we stop anthropomorphising the program as capable of making a "mistake". We recognise it merely as a machine producing incorrect output, so the only mistake was made by the human who chose to rely upon it.
So far, the courts agree. Judges are punishing the gulled lawyers rather than their faux-intelligent tools.