
aapoalas · yesterday at 7:25 PM

I do like the idea of this, but after writing a longer response explaining my positive view of it I came to a different conclusion: I was thinking this could be a useful measure for programs running on CPUs with contention, where your data will occasionally drop out of cache because the CPU is doing something else. But in that sort of situation, you'd expect L1 access speed to be lost _before_ the working set reaches L1 size. This function instead fits fairly well to the actual L1 size (as given by ChatGPT, anyway), meaning it's best thought of as a measure of random access speed on an uncontended CPU.

That being said, I do still like the fundamental idea of figuring out a rough but usable O-estimate for random memory access speeds in a program. It never hurts to have more quick estimation tools in your toolbox.
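
For the sort of quick estimate being described, a classic pointer-chase is the usual tool. Here's a minimal sketch (my own illustration, not the article's benchmark): it builds a random single-cycle permutation so every load depends on the previous one, then times nanoseconds per access across working-set sizes. It assumes POSIX `clock_gettime`; the sizes and step count are arbitrary.

    /* Pointer-chase latency sweep: the knee where ns/access jumps is a
     * rough estimate of L1 data cache size. Sketch only, not a rigorous
     * benchmark (no pinning, no warm-up runs, biased rand()). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double chase_ns(size_t n_elems, size_t steps) {
        size_t *next = malloc(n_elems * sizeof *next);
        /* Sattolo's algorithm: shuffle into a single cycle so each access
         * is a dependent, unpredictable load. */
        for (size_t i = 0; i < n_elems; i++) next[i] = i;
        for (size_t i = n_elems - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;   /* j in [0, i) keeps one cycle */
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }
        struct timespec t0, t1;
        volatile size_t p = 0;               /* volatile: defeat dead-code elim */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t s = 0; s < steps; s++) p = next[p];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(next);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        return ns / (double)steps;
    }

    int main(void) {
        srand(42);
        /* Sweep from well inside L1 to well past it. */
        for (size_t kib = 4; kib <= 1024; kib *= 2) {
            size_t n = kib * 1024 / sizeof(size_t);
            printf("%6zu KiB: %.2f ns/access\n", kib, chase_ns(n, 10000000));
        }
        return 0;
    }

On typical desktop hardware you'd expect a flat region of roughly a nanosecond per access that ends somewhere around 32-48 KiB (common L1d sizes), which is the fit to L1 size described above.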


Replies

hinkley · yesterday at 9:06 PM

I had set-associative caching on a final exam in 1993. We've had it for a long, long time because it substantially improves worst-case eviction behavior relative to naive caches. The sophistication has crept up over time, but if that trick had somehow been missed by all computer engineers, I firmly believe we would have been having this discussion 25-30 years ago.
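
To make the worst-case eviction point concrete, here's a toy cache simulator (my own illustration, nothing from the thread; all parameters are invented for the demo): four addresses that collide in the same set thrash a direct-mapped cache on every single access, while a 4-way LRU cache of the same total capacity absorbs them after four compulsory misses.

    /* Toy comparison of direct-mapped vs. 4-way set-associative (LRU)
     * caches of equal total capacity, under a pathological colliding
     * access pattern. Demo parameters: 64 lines of 64 bytes. */
    #include <stdio.h>

    #define TOTAL_LINES 64
    #define LINE_SHIFT 6   /* 64-byte lines */

    /* Returns miss count; tags per set are kept in MRU-first order. */
    static long simulate(int ways, const unsigned long *trace, long n) {
        int sets = TOTAL_LINES / ways;
        unsigned long tags[TOTAL_LINES];   /* flattened [set * ways + way] */
        int count[TOTAL_LINES] = {0};      /* valid lines per set */
        long misses = 0;
        for (long t = 0; t < n; t++) {
            unsigned long line = trace[t] >> LINE_SHIFT;
            int set = (int)(line % (unsigned long)sets);
            unsigned long tag = line / (unsigned long)sets;
            unsigned long *ws = &tags[set * ways];
            int hit = -1;
            for (int w = 0; w < count[set]; w++)
                if (ws[w] == tag) { hit = w; break; }
            if (hit < 0) {
                misses++;
                if (count[set] < ways) count[set]++;
                hit = count[set] - 1;      /* fill empty slot or evict LRU */
            }
            /* Promote the touched tag to the MRU position. */
            for (int w = hit; w > 0; w--) ws[w] = ws[w - 1];
            ws[0] = tag;
        }
        return misses;
    }

    int main(void) {
        /* Worst case for direct-mapped: four addresses spaced exactly one
         * cache-capacity apart all land in set 0, accessed round-robin. */
        enum { N = 100000 };
        static unsigned long trace[N];
        for (long t = 0; t < N; t++)
            trace[t] = ((unsigned long)(t % 4) * TOTAL_LINES) << LINE_SHIFT;
        printf("direct-mapped: %ld misses / %d accesses\n",
               simulate(1, trace, N), N);
        printf("4-way LRU:     %ld misses / %d accesses\n",
               simulate(4, trace, N), N);
        return 0;
    }

The direct-mapped run misses on all 100,000 accesses; the 4-way run misses exactly 4 times. That gap in worst-case behavior is why set associativity has been standard for so long.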