
maxbond · today at 3:28 AM

> Repeatedly, in a reproducible way, for events in the arrow of time? We can test this by going back to 1945 and running forward again?

This is a frequentist mental model - all well and good, but frequentism and Bayesianism are different schools of statistics. Where frequentism asks, "if I keep drawing samples from this distribution, what does the histogram converge to?", Bayesianism asks, "given my prior understanding and a new piece of evidence (a new sample), how should I adjust my hypothesis about which distribution I am sampling from?" (That is really boiled down, and the frequentist part is maybe even butchered.)
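To make the contrast concrete, here is a minimal sketch of that kind of Bayesian update, using a Beta-Bernoulli conjugate model; the prior, the observations, and the numbers are all illustrative, not anything from Silver's book:

```python
# A minimal sketch of Bayesian updating with a Beta-Bernoulli model.
# We hold a belief (a Beta distribution) about an unknown probability p
# and revise it one observation at a time via Bayes' rule.

from dataclasses import dataclass

@dataclass
class BetaBelief:
    alpha: float  # pseudo-count of observed "successes"
    beta: float   # pseudo-count of observed "failures"

    def update(self, observation: bool) -> "BetaBelief":
        # With a Bernoulli likelihood and a Beta prior, Bayes' rule
        # reduces to incrementing one of the two pseudo-counts.
        if observation:
            return BetaBelief(self.alpha + 1, self.beta)
        return BetaBelief(self.alpha, self.beta + 1)

    def mean(self) -> float:
        # Posterior mean estimate of p.
        return self.alpha / (self.alpha + self.beta)

# Start from a uniform prior (alpha = beta = 1), then fold in a few samples.
belief = BetaBelief(1, 1)
for obs in [True, False, False, True, False]:
    belief = belief.update(obs)
    print(f"posterior mean after observing {obs}: {belief.mean():.3f}")
```

Each observation shifts the belief a little; no appeal to a long-run histogram is needed, which is what lets this style of estimate work with very few samples.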

Among other applications, this enables us to estimate a distribution for which we have a tiny number of samples. A problem I'm interested in is called the Doomsday Argument, which estimates how long humanity will survive using your birth order (the number of humans born before you) and the anthropic principle (we assume you were not born unusually early or unusually late, but somewhere typical in the distribution); interestingly, everything you observe in the universe is already factored into this measurement, so you can't ever get a second sample. Obviously the opportunity for error with one measurement is huge, but you can come up with a number, and it isn't arbitrary; it is a real estimate.
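Here is a minimal sketch of the birth-rank calculation in its simplest (Gott-style) form; the only assumption is a uniform birth rank among all humans who will ever be born, and the specific rank used is just an often-quoted ballpark figure, not a claim of my own:

```python
# A sketch of the Doomsday-style bound: if your birth rank is uniformly
# distributed over all humans ever born, then with confidence c your rank
# falls in the last (1 - c) fraction no more often than (1 - c) of the time,
# which bounds the total from above.

def doomsday_bound(birth_rank: float, confidence: float = 0.95) -> float:
    """Upper bound on the total number of humans ever born, at the given
    confidence level, assuming a uniform birth rank."""
    # With 95% confidence, total < birth_rank / 0.05 = 20 * birth_rank.
    return birth_rank / (1.0 - confidence)

# Roughly 60 billion humans born before a person alive today is the ballpark
# figure often used in discussions of the argument (illustrative only).
my_rank = 60e9
print(f"95% upper bound on total humans ever born: {doomsday_bound(my_rank):.2e}")
```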

Similarly, we only have about 80 samples of years in which it was possible to have a nuclear exchange, so a fairly small sample size, but we can still get a noisy estimate. But I haven't read On The Edge yet, so I don't know exactly what Silver does here.
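As one illustration of how a small, event-free sample still yields a number (and to be clear, this is not Silver's method, just the simplest textbook version: Laplace's rule of succession, i.e. a uniform prior updated with the data):

```python
# A sketch of estimating a rare-event rate from a small sample using
# Laplace's rule of succession (a Beta(1, 1) prior on the annual probability,
# updated with the observed years). Numbers are illustrative.

def rule_of_succession(events: int, trials: int) -> float:
    """Posterior mean of the per-year probability under a uniform prior."""
    return (events + 1) / (trials + 2)

years_observed = 80      # years in which a nuclear exchange was possible
exchanges_observed = 0   # no full nuclear exchange occurred in that window

p_annual = rule_of_succession(exchanges_observed, years_observed)
p_next_50_years = 1 - (1 - p_annual) ** 50

print(f"estimated annual probability: {p_annual:.4f}")
print(f"implied probability over the next 50 years: {p_next_50_years:.2f}")
```

The point isn't that these particular numbers are right, only that a small sample plus a prior produces a noisy but non-arbitrary estimate.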

>> This is kind of the point being made.

> Was it?

I think they meant that all of the solutions people invented to prevent nuclear war, and which commentators failed to anticipate, are reflected in the true probability distribution and in our dataset. So they are captured in our estimate, to the best of our abilities and given the limited data we have.