If someone is presenting themselves to you in person for entry into your bar, you have far more information on which to make a judgement than the color of their skin... so it is not the same.
In the case of a woman coming into contact with some driver and volunteering location information like her home address, she has little to no information on which to make that judgement. Providing her just that one piece of information, and allowing her to discriminate based on it, makes her safer. Ideally, she'd have far more information than just whether the driver is male or female. The reputation information helps, but isn't always reliable.
When you employ someone as a driver, you also have far more information, even more than when you decide whether to let someone into a bar.
>If someone is presenting themselves to you in person for entry into your bar, you have far more information to make a judgement on than the color of their skin... so it is not the same.
So the difference between "good" discrimination and "bad" discrimination is the amount of information on which the decision is based?
Logically, then, Uber could add a "whites only" option, a "no queers" option, and a "no leftists" option. (Of course this is arbitrary, but you can easily come up with a reason why: if you split any group of real people in two, it's only natural that one group has a higher incidence of some negative trait.)
This also has a second problem: what if we let the passenger know not only the driver's sex but also whether the driver ate fish that morning (and hundreds of other useless facts)? Does that stop being discrimination, simply because the passenger now has far more information?
I guess not, but then how do you decide which information is valuable, in order to decide whether there is enough of it to judge the individual instead of going off statistics? How can you say that our theoretical racist patron is in fact racist, and not just going off the only valuable information available?