The answer is that this is how theoretical programming languages have done it since the 1920s, borrowing from the mathematical notation of the time. As programming language theory has made more inroads into practical languages over the last 10-20 years, newer languages have borrowed the notation.
I'm biased, having been immersed in PL theory for a while, but I prefer the colon notation. It works better with type inference, for example. Consider declaring a variable in two ways:
var x: int = 3;
// Now add type inference
var x = 3;
vs.

int x = 3;
// Now add type inference
var x = 3; // We've just changed the type to a keyword and that's weird.
But that's just a personal preference.
I don't like how symmetrical the type-theoretical colon notation is, compared to the regular "element of" notation. (They mean more or less the same thing.) If instead of x : T we wrote x ∈ T (or used any other non-symmetric character), then we could flip it, and use T ∋ x as well.
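Spelling out that argument in LaTeX (my reading of it: the colon is its own mirror image, while the membership symbol has a distinct mirrored form):

```latex
% x : T and x \in T say roughly the same thing, but only \in
% has a mirrored counterpart, \ni, so the order can be flipped:
$x : T$    % colon: visually symmetric, cannot be flipped
$x \in T$  % membership: asymmetric
$T \ni x$  % flipped membership: still unambiguous
```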