Hacker News

xnx · last Friday at 8:44 PM

This formatting is more intuitive to me.

  L1 cache reference                 2,000,000,000 ops/sec
  L2 cache reference                   333,333,333 ops/sec
  Branch mispredict                    200,000,000 ops/sec
  Mutex lock/unlock (uncontended)       66,666,667 ops/sec
  Main memory reference                 20,000,000 ops/sec
  Compress 1K bytes with Snappy          1,000,000 ops/sec
  Read 4KB from SSD                         50,000 ops/sec
  Round trip within same datacenter         20,000 ops/sec
  Read 1MB sequentially from memory         15,625 ops/sec
  Read 1MB over 100 Gbps network            10,000 ops/sec
  Read 1MB from SSD                          1,000 ops/sec
  Disk seek                                    200 ops/sec
  Read 1MB sequentially from disk              100 ops/sec
  Send packet CA->Netherlands->CA                7 ops/sec

Replies

twotwotwo · last Friday at 9:21 PM

Your version only describes what happens if you do the operations serially, though. For example, a consumer SSD can do a million (or more) operations in a second, not 50K, and you can send a lot more than 7 total packets between CA and the Netherlands in a second, but to do either of those you need to take advantage of parallelism.
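
A back-of-the-envelope sketch of where the extra ops/sec come from, via Little's law (throughput = concurrency / latency); the 20 us read latency is implied by the table above, and the queue depth of 32 is an assumed, NVMe-ish value:

  #include <stdio.h>

  int main(void) {
      /* Little's law: throughput = concurrency / latency.
         20 us per 4KB read matches "50,000 ops/sec" in the table above;
         a queue depth of 32 is an assumed, typical NVMe setting. */
      double latency_s = 20e-6;
      printf("serial:       %.0f ops/sec\n", 1.0 / latency_s);   /* 50,000 */
      printf("32 in flight: %.0f ops/sec\n", 32.0 / latency_s);  /* 1,600,000 */
      return 0;
  }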

If the reciprocal numbers are more intuitive for you, you can still say an L1 cache reference takes 1/2,000,000,000 sec. It's "ops/sec" that makes it look like a throughput.

An interesting thing about the latency numbers is that they mostly don't vary with scale, whereas total throughput of your SSD or your Internet link depends on the size of your storage or network setup, respectively. And aggregate CPU throughput varies with core count, for example.

I do think it's still interesting to think about the throughputs (and other things, like capacities) of a "reference deployment": that can affect architectural questions like "can I do this in RAM?", "can I do this on one box?", "what optimizations do I need to fix potential bottlenecks in XYZ?", "is resource X or Y scarcer?", and so on. That was kind of done in "The Datacenter as a Computer" (https://pages.cs.wisc.edu/~shivaram/cs744-readings/dc-comput... and https://books.google.com/books?id=Td51DwAAQBAJ&pg=PA72#v=one... ) with a machine, rack, and cluster as the units. That diagram is about the storage hierarchy and doesn't mention compute, and a lot has improved since 2018, but an expanded table like that still seems like an interesting tool for engineering a system.

VorpalWay · last Friday at 9:25 PM

Your suggestion confuses latency and throughput, so it isn't correct.

For example, a modern CPU will be able to execute other instructions while waiting for a cache miss, and will also be able to have multiple cache loads in flight at once (especially for caches shared between cores).

Main memory is asynchronous too, so multiple loads might be in flight per memory channel. The same goes for all the other layers here (multiple SSD transactions in flight at once, multiple network requests, etc.).
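
One way to see this memory-level parallelism from user code is to compare loads whose addresses form a dependency chain (so the CPU can't overlap the misses) against loads whose addresses are all known up front. A minimal sketch; the array size, the crude shuffle, and the use of clock() are all simplifications:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define N (1u << 24)  /* 16M entries (~128 MB), far larger than any cache */

  int main(void) {
      size_t *next = malloc(N * sizeof *next);
      if (!next) return 1;

      /* Crude random permutation, so the chase below misses cache. */
      for (size_t i = 0; i < N; i++) next[i] = i;
      for (size_t i = N - 1; i > 0; i--) {
          size_t j = rand() % (i + 1);
          size_t t = next[i]; next[i] = next[j]; next[j] = t;
      }

      /* Dependent loads: each address is the previous load's result,
         so misses cannot overlap; this runs at roughly memory latency. */
      clock_t t0 = clock();
      size_t p = 0;
      for (size_t i = 0; i < N; i++) p = next[p];
      double chained = (double)(clock() - t0) / CLOCKS_PER_SEC;

      /* Independent loads: all addresses are known up front, so many
         accesses are in flight at once (prefetch + multiple channels). */
      t0 = clock();
      size_t sum = 0;
      for (size_t i = 0; i < N; i++) sum += next[i];
      double streamed = (double)(clock() - t0) / CLOCKS_PER_SEC;

      printf("chained:  %.3f s (p=%zu)\n", chained, p);
      printf("streamed: %.3f s (sum=%zu)\n", streamed, sum);
      free(next);
      return 0;
  }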

Approximately everything in modern computers is async at the hardware level, often with multiple units handling the execution of the "thing", all the way from the network and SSD to the ALUs (arithmetic logic units) in the CPU.

Modern CPUs are pipelined (and have been since the mid-to-late '90s), so they will be executing one instruction, decoding the next, and retiring (writing out the result of) the previous one all at once. Real pipelines have far more than the three basic stages I just listed, and they can reorder instructions, do things in parallel, etc.
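
The effect of those pipelined units shows up even in a simple reduction: one accumulator is a single dependency chain, while four accumulators give the adder independent work to overlap. A minimal sketch, assuming -O2 without -ffast-math so the compiler doesn't reassociate the single chain itself:

  #include <stdio.h>
  #include <time.h>

  #define N 4096        /* fits in L1, so loads are not the bottleneck */
  #define REPS 100000

  int main(void) {
      static double x[N];
      for (int i = 0; i < N; i++) x[i] = 1.0;

      /* One accumulator: every add waits on the previous add, so the
         loop is limited by floating-point add latency. */
      clock_t t0 = clock();
      double a = 0.0;
      for (int r = 0; r < REPS; r++)
          for (int i = 0; i < N; i++) a += x[i];
      double one = (double)(clock() - t0) / CLOCKS_PER_SEC;

      /* Four accumulators: four independent chains keep the pipelined
         adder busy; typically several times faster. */
      t0 = clock();
      double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
      for (int r = 0; r < REPS; r++)
          for (int i = 0; i < N; i += 4) {
              s0 += x[i]; s1 += x[i+1]; s2 += x[i+2]; s3 += x[i+3];
          }
      double four = (double)(clock() - t0) / CLOCKS_PER_SEC;

      printf("1 accumulator:  %.3f s (a=%g)\n", one, a);
      printf("4 accumulators: %.3f s (sum=%g)\n", four, s0 + s1 + s2 + s3);
      return 0;
  }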

jesse__ · last Friday at 9:24 PM

I prefer a different encoding: cycles/op

Both ops/sec and sec/op depend on clock rate, and clock rate varies across machines and over the execution of your program.

AFAIK, cycles (a la _rdtsc) are as close as you can get to a stable performance measurement for an operation. You can compare them on chips with different clock rates and architectures and derive meaningful insight. The same cannot be said for ops/sec or sec/op.
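
A minimal sketch of that kind of measurement, using the __rdtsc() intrinsic (GCC/Clang, x86 only); the serializing fences a careful benchmark would add are omitted here for brevity:

  #include <stdio.h>
  #include <x86intrin.h>  /* __rdtsc(); GCC/Clang, x86 only */

  int main(void) {
      enum { ITERS = 1000000 };
      volatile long sink = 0;  /* keeps the loop from being optimized away */

      unsigned long long start = __rdtsc();
      for (int i = 0; i < ITERS; i++)
          sink += i;           /* the "operation" under test */
      unsigned long long elapsed = __rdtsc() - start;

      /* On modern x86 the TSC ticks at a fixed reference rate regardless
         of the core clock, which is what makes it comparable across runs. */
      printf("%.2f cycles/op (sink=%ld)\n", (double)elapsed / ITERS, sink);
      return 0;
  }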

CalChris · last Friday at 10:01 PM

I've seen this list many, many times, and I'm always surprised it doesn't include registers.

barfoure · last Friday at 8:56 PM

The reason that formatting is not used is that it's neither useful nor true. The table in the article is far more relevant to the person optimizing things. How many of those I could hypothetically execute per second is a data point for the marketing team. Everyone else is beholden to real-world data sets, and to reads and fetches whose timings are widely distributed.
