In most languages, `x: float = 0` involves an implicit conversion from int to float. In Python, type annotations have no impact on runtime behavior, so even though the type checker accepts this code, `type(x)` will be `int` -- Python acts as if `int` were a subtype of `float`.
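A quick check of this (nothing assumed beyond a stock Python interpreter; type checkers like mypy accept the annotation):

```python
# The annotation is accepted by type checkers, but no conversion
# happens at runtime -- x is still an int object.
x: float = 0
print(type(x))  # <class 'int'>
print(isinstance(x, float))  # False
```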
It would be weird if the behavior of `1 / x` differed depending on whether `0` or `0.0` was passed to an `x: float` parameter -- if `int` is a subtype of `float`, then any operation allowed on `float` (e.g. division) should behave the same on both types.
This means Python had to accept at least one of the following:
1. Division violates the Liskov substitution principle.
2. Division by zero involving only integer inputs returns NaN.
3. Division by zero involving only float inputs throws an exception.
4. It's a type error to pass an int where a float is expected.
They went with option 3, and I think I agree it's the least harmful/surprising choice. Languages with proper static typing don't have to make this unfortunate tradeoff.
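That choice is easy to observe: Python raises `ZeroDivisionError` for both int and float zero divisors, where most languages follow IEEE 754 and return inf/nan for floats.

```python
# Both of these raise, keeping int and float division consistent.
for divisor in (0, 0.0):
    try:
        1 / divisor
    except ZeroDivisionError as e:
        print(type(divisor).__name__, "->", e)
```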
C does different things for `0.0 / 0.0` and `0 / 0` and it's not that weird to deal with (though C has its own issues -- integer division by zero is undefined behavior, so what happens is platform dependent). JS has no problem with it either: `0.0 / 0.0` gives `NaN`, while `0n / 0n` throws, since BigInts are integers.
Python is the only language doing this (of the ones I use at least).
By the way, I don't think the annotation syntax `x: float = 0` even existed when this division behavior was designed, so that can't have been the original design reason?
Since Python handles integer-by-integer true division as float (e.g. `5 / 2` outputs `2.5`), you'd expect `0 / 0` to give nan there.
> Liskov substitution principle
That would imply one is a subtype of the other -- is that really the case here? There are floats that can't be represented as an integer (e.g. 0.5) and integers that can't be represented exactly as a double precision float (e.g. 18446744073709551615).
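Both directions of that mismatch are easy to demonstrate (18446744073709551615 is 2**64 - 1, one past what a double can represent exactly at that magnitude):

```python
# 0.5 has no integer equivalent.
print((0.5).is_integer())   # False

# 2**64 - 1 silently rounds up to 2**64 when converted to a double.
big = 18446744073709551615
print(float(big) == big)    # False
print(int(float(big)))      # 18446744073709551616
```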