“Decision processes can be analyzed and reproduced by a typical citizen without burdensome preconditions” is a nice, simple way to put it. Neural network training is not accessible to a typical citizen (one you might find on a jury) without burdensome effort involving terabytes of input data and hundreds of thousands of dollars, and a black-box pre-trained network does not satisfy the definition of replicable, as it cannot be interpreted by analysis. Techies will object that ‘burdensome’ is poorly defined, but it serves to concentrate the subjective judgement into a measurable test that can be evaluated and justified by the judiciary; I expect a judge would not find “download and execute an AI” to pass that test, but you could always explicitly require analysis to be possible in a reasonable length of time without a computer. Similarly, language regarding ‘typical citizens’ is already well known and understood in the legal field.
This is all moot if no one asks for it, though :) The exact wording of the deck chairs has no bearing on the course of the ship and all.
I'd settle for "interpretable".
Because frankly, I don't think the average citizen can understand even many of the basic algorithms used in data analysis. I teach undergrads, and I can say with high certainty that even many senior CS students have difficulty understanding them. There are PhDs in other STEM fields who struggle too (you can have a PhD in a STEM field without having a strong math background, and "strong" means very different things to different people: for some, calculus counts as strong; for others, PDEs isn't sufficient).
The reason I'd settle for "interpretable" is that it means someone could understand and explain the system. There needs to be some way to verify bias, and not just by black-box probing. Black-box probing lets us detect bias, but only by sampling, and only if we happen to sample just right. Black-box probing makes it impossible to show the *absence* of bias.
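To make the sampling point concrete, here's a minimal sketch (the model, numbers, and names are all hypothetical, not anyone's real system) of black-box probing: feed the model paired inputs that are identical except for a protected attribute and count how often the decisions diverge.

```python
import random

def toy_model(applicant):
    # Hypothetical stand-in for the black box under test:
    # it quietly penalizes group "B" applicants near the threshold.
    penalty = 5_000 if applicant["group"] == "B" else 0
    return applicant["income"] - penalty >= 50_000

def probe_for_bias(model, n_samples=100_000, seed=0):
    """Black-box bias probe: same applicant, two group labels,
    count divergent decisions."""
    rng = random.Random(seed)
    divergent = 0
    for _ in range(n_samples):
        income = rng.uniform(0, 100_000)  # sample a random applicant
        a = model({"income": income, "group": "A"})
        b = model({"income": income, "group": "B"})
        if a != b:
            divergent += 1
    # A nonzero rate demonstrates bias on the sampled points;
    # a zero rate proves nothing about unsampled regions of input space.
    return divergent / n_samples

print(probe_for_bias(toy_model))  # ~0.05: divergence only for incomes in [50k, 55k)
```

Note what the probe can and can't tell you: a hit is positive evidence of bias, but a clean run only says the bias (if any) lives somewhere you didn't sample.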
What I want is __causal__ explanations. And I think that's what everyone wants. And causal explanations can be given even with non-deterministic algorithms.
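And randomness doesn't rule that out: you can give an interventional explanation by holding the input fixed, setting one feature to a different value, and reporting the change in the outcome *distribution* over the algorithm's own randomness. A minimal sketch, with an entirely hypothetical toy decision rule:

```python
import random

def decide(applicant, rng):
    """Toy non-deterministic decision: threshold on income,
    with random noise near the boundary."""
    score = applicant["income"] / 10_000 + rng.gauss(0, 0.5)
    return score > 5

def causal_effect(applicant, feature, new_value, trials=100_000, seed=0):
    """do-style explanation: intervene on one feature and report how
    the approval probability changes, everything else held fixed."""
    rng = random.Random(seed)
    base = sum(decide(applicant, rng) for _ in range(trials)) / trials
    rng = random.Random(seed)  # same random draws, for a paired comparison
    intervened = {**applicant, feature: new_value}
    alt = sum(decide(intervened, rng) for _ in range(trials)) / trials
    return base, alt

applicant = {"income": 48_000}
p0, p1 = causal_effect(applicant, "income", 60_000)
print(f"P(approve) = {p0:.3f}; after do(income=60000) = {p1:.3f}")
```

"Raising income from 48k to 60k raises your approval probability from roughly 0.34 to roughly 0.98" is a causal explanation, and the algorithm never had to be deterministic to give it.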