How is it not general knowledge? How do you otherwise gauge if your program is taking a reasonable amount of time, and, if not, how do you figure out how to fix it?
You gauge with metrics and, if necessary, profiles, and you address problems as needed. You don’t scrutinize every line of code in advance over whether it’s “reasonable” instead of doing things that actually move the needle.
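A minimal sketch of what I mean by profiling, using the stdlib cProfile/pstats (`process_records` is a hypothetical stand-in for whatever your hot path actually is):

```python
import cProfile
import pstats

def process_records(records):
    # Hypothetical hot path; substitute the code you actually suspect.
    return sorted(",".join(r) for r in records)

records = [("a", "b", "c")] * 100_000

# Dump a profile and print the ten most expensive calls by cumulative
# time; this, not line-by-line guessing, tells you where to look.
cProfile.run("process_records(records)", "out.prof")
pstats.Stats("out.prof").sort_stats("cumulative").print_stats(10)
```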
But these performance numbers are meaningless without some sort of standard comparison case. If you measure that, e.g., some string operation takes 100ns, how do you compare against the numbers given here? Any difference could be due to your hardware, Python version, or your implementation. So you have to do proper benchmarking anyway.
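For example, a minimal version of that benchmarking with the stdlib timeit module (the string join is just a stand-in for the hypothetical 100ns operation):

```python
import timeit

# Time the operation on *this* machine and *this* Python version;
# published numbers don't transfer across hardware or interpreters.
n, reps = 1_000_000, 5
totals = timeit.repeat("'-'.join(parts)",
                       setup="parts = ['a', 'b', 'c']",
                       number=n, repeat=reps)
# min() is the least noisy estimate of the achievable per-call cost.
print(f"{min(totals) / n * 1e9:.1f} ns per call")
```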
In my experience, which is Series A or earlier data-intensive SaaS, you can gauge whether a program is taking a reasonable amount of time just by running it and using your common sense.
Say the P50 latency for a FastAPI service’s endpoint is 30+ seconds, or your ingestion pipeline, which has a data ops person on your team waiting for it to complete, takes more than one business day to run.
Your program is obviously unacceptable, and your problems are most likely completely unrelated to these heuristics. You either have an inefficient algorithm or, more likely, you are using the wrong tool (e.g. OLTP for OLAP) or the right tool the wrong way (bad relational modeling or an outdated LLM).
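And gauging that P50 doesn’t require these heuristics either. A minimal sketch using FastAPI’s middleware hook (the in-memory list and the /p50 endpoint are purely illustrative; in practice you’d ship timings to a real metrics backend):

```python
import time
import statistics
from fastapi import FastAPI, Request

app = FastAPI()
latencies: list[float] = []  # illustration only; use a metrics backend in practice

@app.middleware("http")
async def time_requests(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    latencies.append(time.perf_counter() - start)
    return response

@app.get("/p50")
async def p50():
    # quantiles(n=4) returns the quartiles; the middle one is the P50.
    # (Needs at least two recorded requests.)
    return {"p50_seconds": statistics.quantiles(latencies, n=4)[1]}
```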
If you are interested in shaving off milliseconds in this context, then you are wasting your time on the wrong thing.
All that being said, I’m sure there are very good reasons to know this stuff in other domains, organizations, or companies of a different size or stage. I suspect these metrics are irrelevant to a disproportionately large share of the people reading this.
At any rate, for those of us who like to learn, I still found this valuable, but it’s by no means common knowledge.