Hacker News

jordigh · last Tuesday at 7:28 PM · 3 replies

Well, one catch is that you're doing degree ten thousand. That alone increases your computational cost, and you can run afoul of numerical error as your numbers start losing precision. You'll have to start worrying about overflow and underflow. Horner's method or another representation of your polynomial might start to become important. You'll also notice your CPU spiking more. In essence, this approach keeps fitting more and more points until every oscillation you don't want is gone.
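
Roughly, Horner evaluation looks like this (an illustrative Python/NumPy sketch; the degree-10,000 coefficients are just random placeholders, not anything fitted):

```python
import numpy as np

def horner(coeffs, x):
    """Evaluate c[0]*x^n + c[1]*x^(n-1) + ... + c[n] with n multiplies
    and n adds, instead of forming each power of x separately."""
    acc = np.zeros_like(x, dtype=float)
    for c in coeffs:
        acc = acc * x + c
    return acc

rng = np.random.default_rng(0)
coeffs = rng.normal(scale=1e-3, size=10_001)  # degree 10,000 (placeholder values)
x = np.linspace(-1.0, 1.0, 5)
print(horner(coeffs, x))
```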

The other catch, which the author mentions, is that extrapolation is still essentially impossible with polynomials. This is easy to see: once you step outside the interval of interest, the highest-degree term quickly dominates all the other terms, and every non-constant polynomial grows without bound.
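
A quick illustration of that blow-up, using a hypothetical least-squares fit of one period of a sine (exact numbers will vary, but the pattern won't):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)

# Fit a degree-15 polynomial on [0, 1], then step outside that interval.
p = np.polynomial.Polynomial.fit(x, y, deg=15)

print(p(0.5))   # inside the interval: close to the true value sin(pi) = 0
print(p(1.5))   # just outside: already drifting well away from [-1, 1]
print(p(3.0))   # further out: the leading term dominates and it blows up
```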

Honestly, though, what the author is doing isn't much different from a Taylor approximation. If you're okay fitting a polynomial in a small neighbourhood of the data, then go whole hog and fit the best possible polynomial there: a Taylor polynomial. You usually wouldn't need such a high degree to get what you want in a neighbourhood, either.
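
For concreteness, a minimal sketch of a Taylor polynomial built from known derivatives (here sin about x0 = 0; the function and degree are just illustrative):

```python
import numpy as np
from math import factorial

def taylor_sin(x, x0=0.0, degree=5):
    """Degree-n Taylor polynomial of sin about x0, built from its
    derivatives (which cycle through sin, cos, -sin, -cos)."""
    derivs = [np.sin(x0), np.cos(x0), -np.sin(x0), -np.cos(x0)]
    return sum(derivs[k % 4] * (x - x0) ** k / factorial(k)
               for k in range(degree + 1))

x = np.linspace(-0.5, 0.5, 101)
print(np.max(np.abs(taylor_sin(x) - np.sin(x))))  # small near x0, grows away from it
```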


Replies

grandempire · yesterday at 2:15 AM

> Horner's method or another representation of your polynomial might start to become important.

Horner's is the default way to evaluate a polynomial - and I think you're overstating the cost of evaluation.

> what the author is doing isn't much different than a Taylor approximation

Yes it is. A Taylor approximation is only valid around a single point, and it's constructed by matching derivatives up to nth order at that point. There's no error metric over the data. And it requires differentiation to find it, so you need the function itself, not just sampled points.

wbl · last Tuesday at 7:42 PM

Taylor is for points. For functions on an interval you want Chebyshev interpolation, and you probably want to throw in some poles (i.e., use a rational approximation) to do better if the L1 rather than L2 error is what matters.
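
A sketch of the Chebyshev part using NumPy's chebinterpolate (the function and degree are arbitrary examples; the pole/rational part isn't shown):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate exp at Chebyshev points on [-1, 1], then check the error
# over the whole interval rather than at a single expansion point.
coef = C.chebinterpolate(np.exp, deg=12)

xs = np.linspace(-1.0, 1.0, 1000)
print(np.max(np.abs(C.chebval(xs, coef) - np.exp(xs))))  # small, near-uniform error
```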