Real time is about worst case time, not average.
The classic test for a real time OS is simple. You write a program that waits for an interrupt from an input pin. When the input is raised, there's an interrupt. That activates a user process. The user process turns on an output pin.
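For concreteness, here is a minimal sketch of that pin-echo program. It's written against the legacy Linux sysfs GPIO interface rather than any particular RTOS; the GPIO numbers are placeholders, and it assumes both pins are already exported, with the input's edge trigger set to "rising" and the output's direction set to "out".

```c
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder pin numbers; configure the pins via sysfs beforehand. */
    int in  = open("/sys/class/gpio/gpio17/value", O_RDONLY);
    int out = open("/sys/class/gpio/gpio27/value", O_WRONLY);
    if (in < 0 || out < 0) { perror("open gpio"); return 1; }

    char buf[8];
    struct pollfd pfd = { .fd = in, .events = POLLPRI | POLLERR };

    lseek(in, 0, SEEK_SET);
    read(in, buf, sizeof buf);          /* clear any stale edge */
    for (;;) {
        poll(&pfd, 1, -1);              /* block until the edge interrupt fires */
        lseek(in, 0, SEEK_SET);
        read(in, buf, sizeof buf);      /* acknowledge the edge */
        write(out, "1", 1);             /* raise the output pin */
        write(out, "0", 1);             /* drop it: one pulse per input edge */
    }
}
```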
You hook this up to a signal generator and a scope. Feed in a square wave. See the delay between input and output. If there are outliers, delays much larger than the normal delay, the system is not doing real time properly at all.
If it passes that test, run some other program at a lower priority than the one that's monitoring the input pin. The results should not change. The other program should reliably be preempted.
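One way to set that up on a POSIX-style system is to put the pin-echo process into a fixed-priority real-time scheduling class before its loop starts. The helper below is only an illustrative sketch, not from the original comment; the priority value is arbitrary, and on Linux this call needs root or CAP_SYS_NICE.

```c
#include <sched.h>
#include <stdio.h>

/* Put the calling process into the POSIX SCHED_FIFO class so that
 * ordinary time-shared (SCHED_OTHER) background load is always preempted. */
static int make_realtime(int prio)
{
    struct sched_param sp = { .sched_priority = prio };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}
```

Call it in the pin-echo program before the loop, then run something CPU-bound, say a compile or `yes > /dev/null`, at default priority alongside it. The outliers on the scope should not grow.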
QNX could pass that test, at least when I last used it.
Note that this requirement conflicts with many modern CPU and OS features, such as sleep modes, power-saving under light load, and paging. Classically, you didn't care, because the control CPU used far less power than whatever large piece of equipment it was controlling. But for things that run on batteries, it's a problem.
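On Linux, for example, two of the usual mitigations are locking the process's pages in memory and holding /dev/cpu_dma_latency open to keep the CPUs out of deep sleep states. This is a hedged sketch, not QNX-specific, and the zero-microsecond latency request is an assumption about what the test needs.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Lock every current and future page so the latency path never
     * takes a page fault, i.e. opt this process out of paging. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* Ask the kernel's PM-QoS layer to keep CPUs out of deep sleep
     * states for as long as this file descriptor stays open. The value
     * written is the maximum tolerable wakeup latency in microseconds. */
    int fd = open("/dev/cpu_dma_latency", O_WRONLY);
    if (fd >= 0) {
        int32_t max_latency_us = 0;
        write(fd, &max_latency_us, sizeof max_latency_us);
    } else {
        perror("open /dev/cpu_dma_latency");
    }

    pause();   /* stand-in for the real latency-sensitive workload */
    return 0;
}
```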
Something that's given trouble: "system management mode". Some industrial computers used code running in system management mode to help out with some peripherals, such as making flash memory look like a hard drive. This time-stealing shows up in the interrupt latency test. QNX techs used to quietly have a blacklist of embedded computers to avoid.
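A crude way to see that kind of time theft without a scope is a jitter probe in the spirit of tools like hwlatdetect: spin on the monotonic clock and flag any gap much larger than the loop itself. The 50 microsecond threshold below is an arbitrary assumption; tune it for the board.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    const uint64_t threshold_ns = 50 * 1000;   /* 50 us */
    uint64_t prev = now_ns();
    for (;;) {
        uint64_t t = now_ns();
        /* On an otherwise idle, isolated core, a gap much larger than the
         * loop itself is time stolen from under the OS: typically an SMI,
         * or a hypervisor if the test is run in a VM by mistake. */
        if (t - prev > threshold_ns)
            printf("stolen interval: %llu us\n",
                   (unsigned long long)((t - prev) / 1000));
        prev = t;
    }
}
```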