I'd settle for "interpretable".
Because frankly, I don't think the average citizen can understand a lot of even basic algorithms used in data analysis. I teach undergrads and I can say with high certainty that even many senior CS students have difficulty understanding them. There are PhDs in other STEM fields who have issues too (you can have a PhD in a STEM field without having a strong math background, and "strong" means very different things to different people: for some, calculus counts as strong, while for others PDEs isn't sufficient).
The reason I'd settle for "interpretable" is that someone could understand and explain it. There needs to be some way we can verify bias, and not just by black box probing. While black box probing lets us detect bias, it only does so by sampling, and it requires us to sample in just the right places. Black box probing can never demonstrate the *absence* of bias.
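To make the sampling point concrete, here's a minimal sketch. Everything in it is invented for illustration: `black_box` is a toy model whose hidden rule discriminates only on a narrow slice of inputs, and `probe` is a sampling-based prober. The point is that a probe can only report what its samples happened to hit; a sparse probe can miss the biased region entirely and make the model look fair.

```python
import random

def black_box(applicant):
    """Stand-in for an opaque model we can only query.
    Hypothetical hidden rule: rejects group B, but only in a tiny age band."""
    age, group = applicant
    if group == "B" and 40 <= age <= 42:  # bias hides in a narrow region
        return 0  # reject
    return 1      # approve

def probe(n, seed):
    """Black-box probing: sample random inputs, compare approval rates by group."""
    rng = random.Random(seed)
    counts = {"A": [0, 0], "B": [0, 0]}  # group -> [approvals, total]
    for _ in range(n):
        applicant = (rng.randint(18, 80), rng.choice(["A", "B"]))
        counts[applicant[1]][0] += black_box(applicant)
        counts[applicant[1]][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

# A small probe may sample past the biased band and report near-equal rates;
# only a dense enough probe reliably exposes the gap between groups.
print(probe(100, seed=0))
print(probe(100_000, seed=0))
```

And even when a dense probe reports equal rates, that only shows no bias *in the regions sampled* — which is exactly why probing can't establish non-bias.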
What I want is __causal__ explanations, and I think that's what everyone really wants. Causal explanations can be given even for non-deterministic algorithms.