Is there some obvious reason not to measure requests per minute rather than second? Or is it an offhand joke?
Some systems I've worked on had APIs that averaged less than one request per second, but I don't think we want to be measuring in millibecquerels. Others were measured in millions of requests per hour, because hourly usage was the key quantity: rate limits were hourly as well.
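For concreteness: a becquerel is one event (decay) per second, so a request rate maps numerically onto it. A quick back-of-the-envelope sketch in Python; the traffic figure is made up for illustration:

```python
# 1 Bq is one event per second, so req/s maps numerically onto Bq.
AVG_REQ_PER_HOUR = 1_800  # hypothetical low-volume API

req_per_sec = AVG_REQ_PER_HOUR / 3600  # 0.5 req/s
milli_bq = req_per_sec * 1_000         # 500 "millibecquerels" of requests

print(f"{req_per_sec} req/s is {milli_bq:.0f} mBq of requests")
```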
I guess there's a difference between talking about how many requests a system is capable of handling, and how many they actually get.
At least when I first encountered the discussion (some thirty years ago), we usually talked about how many requests the system was capable of handling. Requests per second was the obvious unit, since a request usually took less than a second to process (depending on the system and so on, of course, but mostly), so that unit often gave a fairly low, comprehensible number.
Was it ten? A hundred (very impressive)? Perhaps even a thousand (very, very impressive!)?
Multiply those numbers by 60 and there's suddenly a lot more mental gymnastics involved. Multiply by 3600 and you're well into "all big numbers look the same" land.
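To make that concrete, here's the same set of capability figures in all three units (the rates are hypothetical round numbers):

```python
# The same capability figure expressed in req/s, req/min, and req/h.
for req_per_sec in (10, 100, 1_000):
    print(f"{req_per_sec:>5} req/s = {req_per_sec * 60:>7,} req/min"
          f" = {req_per_sec * 3_600:>10,} req/h")
```

10 req/s reads instantly; 36,000 req/h takes a beat to place on the same scale.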
In my experience, rate limits are more often per second. It's easy to talk about kilo- or mega-units, so big numbers aren't as much of a problem as the awkwardness of describing very-low-volume services. Maybe those generally, and inherently, don't care about rates as much?
>Is there some obvious reason not to measure requests per minute rather than second?
It's much clearer to say something like "average req/min" or whatever, but then again you can't write a cool blog post about taking an SI unit for radioactivity and shoving it into a nonsensical context.
If you're looking at SI units, the base unit for time is the second.
SI prefixes all scale by powers of 10, not 60, so minutes and hours are not really compliant.
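Which is part of why per-second rates compose so nicely with prefixes. A minimal formatter sketch, assuming the standard power-of-10 SI prefixes (the function name and cut-offs here are my own invention):

```python
def format_rate(req_per_sec: float) -> str:
    """Format a per-second rate with a power-of-10 SI prefix."""
    for factor, prefix in ((1e9, "G"), (1e6, "M"), (1e3, "k"), (1, "")):
        if req_per_sec >= factor:
            return f"{req_per_sec / factor:.1f} {prefix}req/s"
    return f"{req_per_sec * 1e3:.0f} mreq/s"  # sub-1/s rates, per the mBq joke

print(format_rate(12_500))  # 12.5 kreq/s
print(format_rate(0.5))     # 500 mreq/s
```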