The i486DX 33MHz was introduced in May 1990. That's a 30x increase, or about five doublings, in clock speed over ten years. It's of course not the whole truth; the Athlon could do much more in one cycle than the 486. In any case, in 2010 we clearly did not have 30GHz processors – by then, the era of exponentially rising clock speeds was very decidedly over. I bought an original quad-core i7 in 2009 and used it for the next fifteen years. In that time, roughly one doubling in the number of cores and one doubling in clock speeds occurred.
It is true that we haven’t seen single-core clock speeds increasing as fast for a long while now. And I think everyone agrees that some nebulously defined “rate of computing progress” has slowed down.
But we can be slightly less pessimistic if we’re more specific. Already by the early '90s, a lot of the clock speed increase came from strategies like pipelining, superscalar execution, and branch prediction: instruction-level parallelism. Then in the 2000s we started using additional parallelism strategies like multicore and SMT.
It isn’t a meaningless distinction. There’s a real difference between parallelism that the compiler and hardware can usually figure out, and parallelism that the programmer usually has to expose.
But there’s some artificiality to it. We’re talking about the ability of parallel hardware to provide the illusion of sequential execution. And we know that if we want full “single threaded” performance, we have to think about the instruction level parallelism. It’s just implicit rather than explicit like thread-level parallelism. And the explicit parallelism is right there in any modern compiler.
If the syntax of C were slightly different, to the point where the compiler could automatically add OpenMP pragmas to all its for loops, we’d have 30GHz processors by now, haha.
Clock speed increases definitely slowed down, but now that software can use parallelism better, we're seeing big wins again. Current desktop/laptop packages are doing 100 trillion operations per second. The article's processor could do one floating point op per cycle, i.e. about 1 billion ops per second. So we've seen a 100,000x speedup in the last 25 years. That's a doubling every ~1.5 years since 2000.
It's not quite apples-to-apples, of course, due to floating point precision decreasing since then, vectorization, etc, but it's not like progress stopped in 2000!
On the plus side, the 486DX-33 didn’t require active cooling. The second half of the 1990s was when home computing became noisy, and the art of trying to build silent PCs began.
"The era of exponentially rising clock speeds" was already over in 2003, when the 130-nm Pentium 4 reached 3.2GHz.
All the later CMOS fabrication processes, starting with the 90-nm process (in 2004), have provided only very small improvements in clock frequency, so that now, 23 years after 2003, desktop CPUs have still not doubled that clock frequency.
In the history of computers, the decade with the highest rate of clock frequency increase was 1993 to 2003, during which the clock frequency rose from 67 MHz in the first Pentium (1993) up to 3.2 GHz in the last Northwood Pentium 4: an almost 50-fold increase in a single decade.
For comparison, in the previous decade, 1983 to 1993, the clock frequency of mass-produced CPUs had increased only around 5 times, i.e. at about a tenth of the rate of the decade that followed.