I was doing over 11M IOPS during that test ;-)
Edit: I wrote about that setup and other Linux/PCIe root complex topology issues I hit back in 2021:
That's super hot, especially the update with the 37M IOPS reference. It might be very useful for my next tasks on a setup with 6 NVMe disks: 1. Saturate all disks through the network (including RDMA). 2. Play with io_uring to share a polling thread. No luck so far: if I share the kernel poller between two devices, the improvement is only ~30%, at the cost of one core. I'm considering alternative schemes now.
FYI 11M IOPS in terms of AWS EBS is 138 gp3 volumes (80K IOPS each), which costs about $56K/month or about $1.3M over 2 years. If anyone was considering using EBS for high-IOPS workloads, don't.
I think your test used ten 980 Pros, which were probably around $120 each at the time (~$1,200 total). SSDs are wildly more expensive now, but even at $500 each you're nowhere close to EBS pricing.
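A back-of-the-envelope version of the comparison above, using only the figures quoted in this thread (the per-volume IOPS and the $56K/month total are the numbers from the comment, not AWS quotes):

```python
import math

# Figures taken from the comments above -- assumptions, not AWS pricing quotes.
target_iops = 11_000_000          # measured IOPS in the test
gp3_iops = 80_000                 # per-volume IOPS figure used in the comment
ebs_monthly_total = 56_000        # quoted monthly cost for the full fleet

volumes = math.ceil(target_iops / gp3_iops)   # number of EBS volumes needed
ebs_two_years = ebs_monthly_total * 24        # 2-year EBS spend

ssd_count = 10                    # drives in the test rig
ssd_price_today = 500             # pessimistic per-drive price from the comment
diy_cost = ssd_count * ssd_price_today        # one-time hardware cost

print(volumes, ebs_two_years, diy_cost)
```

Even with the pessimistic $500/drive figure, the one-time hardware cost is roughly three orders of magnitude below the 2-year EBS spend (ignoring the server, power, and ops time, of course).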
It's apples vs oranges, but sometimes you just want fruit.