Because "0" often means "show none", so when dividing by 0, I'm fine "showing none".
I'm sure it doesn't work for everybody, but I've never had a case where handling zero in a division didn't end with "actually, let's count it as 0".
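As a rough sketch of that convention (the helper name is hypothetical, not anyone's actual code), a display-oriented divide might simply fall back to 0:

    def safe_div(numerator: float, denominator: float) -> float:
        # Treat division by zero as 0, matching the "show none" convention above.
        if denominator == 0:
            return 0.0
        return numerator / denominator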
There are several domains where 0 is different from none, for example, most computations involving rates.
Imagine that you are running some A/B tests and you want to track conversions. If one of the experiments received 10 users and had 5 conversions, you want to show 50%. If it received 10 users and had no conversions, you will show 0%.
However, if it received 0 users, while you could show zero conversions, the correct answer is to say that you don't know the conversion rate. Maybe, if you had had 10 users, they could have all converted, and the rate would be 100%. You simply don't know.
The same logic applies to computing ROI, interest, velocity, etc.
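A minimal sketch of that distinction (function name hypothetical), where "no users" comes back as unknown rather than 0%:

    from typing import Optional

    def conversion_rate(conversions: int, users: int) -> Optional[float]:
        # With no users there is no observed rate: return None ("unknown")
        # instead of 0%, which would claim a measurement that was never made.
        if users == 0:
            return None
        return conversions / users

    print(conversion_rate(5, 10))  # 0.5  -> display as 50%
    print(conversion_rate(0, 10))  # 0.0  -> display as 0%
    print(conversion_rate(0, 0))   # None -> display as "n/a", not 0%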