In this case wouldn't a Fourier-type approach work better? At least there's no risk of the function blowing up, and it might need fewer parameters?
Yeah, I thought the whole point of a polynomial approximation was that it's only useful for the first couple of powers (quick and cheap), or when you have a particular process that you know a priori has a particular form (non-convergent, non-conservative, higher-powered, etc.).
The "particular form" is important--if you don't know that then there is no reason for choosing x^10000 over x^5 (other than computational complexity). But there is also no reason for choosing x^5 over x^10000! Maybe that function really is flat and maybe it isn't. You really just don't know.
If you don't have anything too weird, Fourier is pretty much optimal in the limit--if a bit expensive to calculate. In addition, since you can "bandwidth limit" it, you can very easily control overfitting and oscillation. What's more, Fourier often reflects something about the underlying process (epicycles, for example, were correct--they were pointing at the fact that orbits are ellipses).
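To make the "bandwidth limit" point concrete, here's a minimal sketch (my own illustration, not anyone's library): a least-squares fit of a truncated Fourier series to noisy samples, where capping the number of harmonics is the knob that controls overfitting.

    import numpy as np

    def fourier_design(x, n_harmonics, period=1.0):
        # Constant column plus cos/sin pairs up to the chosen harmonic.
        cols = [np.ones_like(x)]
        for k in range(1, n_harmonics + 1):
            cols.append(np.cos(2 * np.pi * k * x / period))
            cols.append(np.sin(2 * np.pi * k * x / period))
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200, endpoint=False)
    y = np.sin(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x) + rng.normal(0, 0.1, x.size)

    # Few harmonics: smooth fit. Many harmonics: the fit starts chasing the noise.
    for n in (3, 50):
        A = fourier_design(x, n)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rms = np.sqrt(np.mean((y - A @ coef) ** 2))
        print(n, "harmonics, residual RMS:", rms)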
The problem with Fourier approximation is that it works terribly for relatively "simple" functions. E.g. fitting a linear relationship is extremely hard: a straight line isn't periodic, so a truncated Fourier series needs a lot of terms and still rings near the ends of the interval (the Gibbs phenomenon).
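A quick sketch of that failure mode (again, just my own illustration): least-squares fitting a truncated Fourier series to y = x on [0, 1). Because the periodic extension has a jump at the boundary, the fit rings near the ends and the worst-case error barely improves as you add harmonics, whereas a degree-1 polynomial recovers the line exactly with two coefficients.

    import numpy as np

    x = np.linspace(0, 1, 400, endpoint=False)
    y = x  # a perfectly "simple" function

    for n in (5, 20, 80):
        # Truncated Fourier design matrix: constant plus cos/sin pairs up to harmonic n.
        A = np.column_stack(
            [np.ones_like(x)]
            + [f(2 * np.pi * k * x) for k in range(1, n + 1) for f in (np.cos, np.sin)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(n, "harmonics, max abs error:", np.max(np.abs(y - A @ coef)))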