
shoo · today at 9:18 PM

The underlying argument the article seems to be making is that the appropriate algorithm for a given application isn't always the one with the most efficient asymptotic performance for sufficiently large n. For a given application (in this case routing), we have data on the typical values of n that occur in reality, so we can choose an algorithm that offers good-enough (or optimal) performance for n in that constrained range, with additional benefits such as being simpler to implement correctly in a short amount of engineering time.
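
To make this concrete with a textbook example of my own (not something from the article): standard-library sorts exploit exactly this by switching from quicksort to plain insertion sort once subranges get small, because insertion sort's constant factors win at those sizes. A minimal C++ sketch, with a made-up cutoff:

    #include <algorithm>
    #include <vector>

    // Hypothetical cutoff -- real implementations tune this empirically.
    constexpr long kSmallCutoff = 16;

    // Insertion sort: O(n^2) worst case, but tiny constants and
    // sequential, cache-friendly memory access.
    void insertion_sort(std::vector<int>::iterator first,
                        std::vector<int>::iterator last) {
        for (auto it = first; it != last; ++it)
            std::rotate(std::upper_bound(first, it, *it), it, it + 1);
    }

    // Quicksort that hands small subranges to the asymptotically
    // "worse" algorithm, because for small n the simple one is faster.
    void hybrid_sort(std::vector<int>::iterator first,
                     std::vector<int>::iterator last) {
        if (last - first <= kSmallCutoff) {
            insertion_sort(first, last);
            return;
        }
        int pivot = *(first + (last - first) / 2);
        auto mid1 = std::partition(first, last,
                                   [&](int x) { return x < pivot; });
        auto mid2 = std::partition(mid1, last,
                                   [&](int x) { return !(pivot < x); });
        hybrid_sort(first, mid1);   // strictly-less partition
        hybrid_sort(mid2, last);    // strictly-greater partition
    }

The article's point generalizes this: if your n never leaves the "small" regime, you can skip the fancy algorithm entirely.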

This argument is very much in line with Mike Acton's data-oriented design philosophy [1]: understand the actual, specific problem you need to solve, not the abstract general problem. Understand the statistical distribution of real problem instances; their parameters fall in some bounded range. Understand the hardware you're writing software for, with its finite capacity and capabilities.

It's common for new algorithms or data structures with superior asymptotic efficiency to be less performant than simpler alternatives at small problem sizes, typically because of larger constant factors and worse cache behavior. As always, it depends on the specifics of the application.
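
A rough illustration of that (mine, not the article's): membership tests against ~32 ints, where an O(n) linear scan over contiguous memory often beats an O(log n) std::set lookup, since the scan stays in cache and avoids pointer chasing. The sizes and workload below are invented:

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <set>
    #include <vector>

    int main() {
        constexpr int kN = 32;           // "small n", as in many real inputs
        constexpr int kLookups = 1'000'000;

        std::vector<int> vec;            // flat, contiguous storage
        std::set<int> tree;              // node-based, O(log n) lookups
        for (int i = 0; i < kN; ++i) {
            vec.push_back(i * 7);
            tree.insert(i * 7);
        }

        // Time kLookups membership tests through the given predicate.
        auto bench = [&](const char* name, auto&& contains) {
            auto t0 = std::chrono::steady_clock::now();
            long hits = 0;
            for (int i = 0; i < kLookups; ++i)
                hits += contains(i % (kN * 7)) ? 1 : 0;  // hits and misses
            auto t1 = std::chrono::steady_clock::now();
            auto us = std::chrono::duration_cast<
                std::chrono::microseconds>(t1 - t0);
            std::printf("%-12s %ld hits in %lld us\n", name, hits,
                        static_cast<long long>(us.count()));
        };

        bench("linear scan", [&](int x) {
            return std::find(vec.begin(), vec.end(), x) != vec.end();
        });
        bench("std::set", [&](int x) { return tree.count(x) != 0; });
    }

On typical hardware I'd expect the flat scan to win comfortably at this size; grow kN enough and the asymptotics eventually take over.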

[1] see Mike's CppCon 2014 talk "Data-Oriented Design and C++" https://www.youtube.com/watch?v=rX0ItVEVjHc