
Matumio · 11/10/2024

If by "modern" you mean a desktop/server CPU, the problem is the complexity that comes from optimizing for average throughput. Do you really know how long you will wait, worst-case, if three different cores flush the same cache line back to DRAM? Or maybe some niche piece of hardware you use has some funky behaviour that stalls a hardware bus, or makes a driver wait in the kernel for a millisecond, twice a month or whatever.
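
To make the cache-line point concrete, here is a hypothetical microbenchmark (not from the comment, and the 64-byte line size is an assumption): two threads hammer counters that sit on the same cache line, then on separate lines, so the cost of cores fighting over one line becomes visible in the timing.

    /* Sketch: contention on a shared cache line vs. padded counters.
     * Build with: cc -O2 -pthread falseshare.c */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 100000000UL

    /* Both counters likely land in the same 64-byte cache line. */
    static struct { volatile uint64_t a, b; } same;

    /* Same counters, padded so each gets its own cache line (assumed 64 B). */
    static struct { volatile uint64_t a; char pad[64]; volatile uint64_t b; } apart;

    static void *bump(void *p) {
        volatile uint64_t *c = p;
        for (uint64_t i = 0; i < ITERS; i++) (*c)++;
        return NULL;
    }

    static double run(volatile uint64_t *x, volatile uint64_t *y) {
        struct timespec t0, t1;
        pthread_t ta, tb;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&ta, NULL, bump, (void *)x);
        pthread_create(&tb, NULL, bump, (void *)y);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        printf("same cache line:      %.2f s\n", run(&same.a, &same.b));
        printf("separate cache lines: %.2f s\n", run(&apart.a, &apart.b));
        return 0;
    }

This only shows the average-throughput penalty of line contention; the worst-case latency of any single access is exactly the part you can't read off a benchmark like this.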

On the other hand, on an FPGA, deterministic timing is very simple. Your output will not be a single clock cycle late even if something else goes wrong in logic running on the same FPGA (except through a connection that you control).

If you really want to know, OSADL has a QA Farm that monitors worst-case interrupt and scheduling latency for various CPUs and Linux versions.
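
For the flavour of what such a farm measures, here is a minimal cyclictest-style sketch (my own illustration, not OSADL's code): sleep to an absolute deadline each cycle and record how late the wakeup actually was; the maximum over a long run is the observed worst-case scheduling latency.

    /* Sketch of a cyclictest-style wakeup-latency probe (Linux, POSIX). */
    #define _POSIX_C_SOURCE 200809L
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec next, now;
        const long period_ns = 1000000;   /* 1 ms cycle */
        int64_t worst_ns = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 10000; i++) {
            /* Advance the absolute deadline by one period. */
            next.tv_nsec += period_ns;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            /* How far past the deadline did we actually wake up? */
            int64_t late_ns = (now.tv_sec - next.tv_sec) * 1000000000LL
                            + (now.tv_nsec - next.tv_nsec);
            if (late_ns > worst_ns) worst_ns = late_ns;
        }
        printf("worst-case wakeup latency: %lld ns\n", (long long)worst_ns);
        return 0;
    }

The real tool runs for days under load with real-time priorities; the point here is just that "worst case" is something you can only observe empirically on such a CPU, whereas on an FPGA it falls out of the design.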