Who needs throughput? Server software, but Linux is already dominant there and everyone's happy. For everything else, including desktop and mobile operating systems, realtime sounds like a good idea.
Absolutely deterministic latency isn't a requirement for the desktop like it is for industrial control.
If 99% of the time performance is faster, and the remaining 1% of the time isn't that bad, most people will take the trade for higher average performance. If it's running the antilock brakes in your car, though, that 1% is pretty bad.
> Who needs throughput? Server software, but Linux is already dominant there and everyone's happy. For everything else, including desktop and mobile operating systems, realtime sounds like a good idea.
No, not at all. The actual question is who needs realtime.
Desktop is a typical place where you absolutely don't need realtime. You just want latency to be under the user perception threshold, and that's easily achievable with a standard kernel. Plus the consequence of failing is basically nil. Why bother with the complexity introduced by an RT kernel?
The latency on a modern desktop is not a consequence of unpredictable scheduling; it's just poorly optimised applications. An RT kernel is not a magical solution to that.
> For everything else, including desktop and mobile operating systems, realtime sounds like a good idea.
Why on earth would you need realtime on the desktop? Every time the topic is discussed on the internet, a bunch of confused people chime in with this sentiment, and it doesn't make much sense. An RTOS is not some magic that somehow makes everything on your desktop fast. All it would do in reality is make everything slower for 99% of your interaction, but guarantee that the "slowness" is uniform and you don't get weird latency spikes in the other 1%. Note that for cases that are not "nuclear reactor control system that requires a Very Certified OS with audited and provable reaction times", real-time Linux (the PREEMPT_RT patches) is already available, but distros are not inclined to use it on the desktop for the reasons described above.
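For what it's worth, you don't even need a special kernel to get real-time scheduling for one latency-critical process: stock Linux already exposes it through the POSIX API. A minimal sketch (priority 50 is an arbitrary illustrative value, and the call needs root or a suitable RLIMIT_RTPRIO to succeed):

    #include <sched.h>
    #include <stdio.h>

    /* Sketch only: even a stock Linux kernel lets a single
     * latency-critical process opt into real-time (SCHED_FIFO)
     * scheduling. No RTOS required. */
    int main(void) {
        struct sched_param param = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        puts("running under SCHED_FIFO");
        /* ... latency-sensitive work goes here ... */
        return 0;
    }

Audio daemons and some games already do exactly this for their time-critical threads, which is part of why a blanket RT kernel buys the desktop so little.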
On a 60Hz screen with one frame in the buffer, desktop latency at its lowest is typically around 17ms, and up to 33ms if you just miss a refresh and have to wait for the one after it. And most operating systems and games default to more than one buffered frame. Still, even the best case is a pretty big budget of time; the microsecond precision of an RTOS is not necessary there. If you want to increase responsiveness or reduce latency for interactive use of desktops or phones, there are other avenues to pursue that shave off milliseconds at a time.
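To make the arithmetic concrete, a back-of-the-envelope sketch (numbers are illustrative, not measurements): with N frames queued at a given refresh rate, the best case is N frame periods and the worst case, where input arrives just after a refresh, is N + 1.

    #include <stdio.h>

    /* Rough display-latency bounds: with `buffered` frames queued on a
     * screen refreshing at `hz`, best case is buffered * period and
     * worst case (input just missed a refresh) is (buffered + 1) * period. */
    int main(void) {
        const double hz = 60.0;
        const double frame_ms = 1000.0 / hz;  /* ~16.7 ms per refresh */
        for (int buffered = 1; buffered <= 3; buffered++) {
            printf("%d buffered frame(s): %4.1f ms best, %4.1f ms worst\n",
                   buffered, buffered * frame_ms, (buffered + 1) * frame_ms);
        }
        return 0;
    }

For one buffered frame this reproduces the ~17ms/~33ms figures above; each extra buffered frame adds another ~16.7ms on both ends.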
That's waaaay too broad a grouping. You're thinking of PCs, basically, in one form or another (or their modern replacement, mobile devices).
Industrial automation and hard real-time embedded devices, like the controllers that drive the control surfaces in Fly-By-Wire systems, are out there, and with those it's not a matter of "throughput" but a matter of timeliness.
I did a major project with QNX about a decade ago where the maximum allowed latency, from the master controller asking any device on the (fiber-optic) network a question to receiving and processing the reply, was less than 1 millisecond, full stop.
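The shape of that constraint, purely as an illustrative sketch (query_device is a hypothetical stand-in for the real fiber-optic round trip, and the 1ms budget mirrors the figure above):

    #include <stdio.h>
    #include <time.h>

    #define DEADLINE_NS 1000000L  /* the 1 ms budget, full stop */

    /* Hypothetical stand-in for ask-question/get-reply on the network. */
    static int query_device(int device_id) { (void)device_id; return 0; }

    static long elapsed_ns(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1000000000L
             + (b.tv_nsec - a.tv_nsec);
    }

    int main(void) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        int reply = query_device(42);   /* ask, receive, process */
        clock_gettime(CLOCK_MONOTONIC, &end);
        if (elapsed_ns(start, end) > DEADLINE_NS) {
            /* On a hard real-time system a miss is a fault, not a hiccup. */
            fprintf(stderr, "missed the 1 ms deadline\n");
            return 1;
        }
        printf("reply %d within deadline\n", reply);
        return 0;
    }

The point is that the deadline is part of the spec: a late answer is a wrong answer, which is what separates hard real-time from merely fast.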
Come in late and most of the time the sky doesn't fall in. But on the occasions that it does, you potentially have a 4-meter-wide ribbon of red-hot steel flying out of the stands at 50mph. Or you blow one of the transistors that control the 10-20 megawatt DC motors in each stand. (These "transistors" were boxes about 7' tall, filled with oil, and cost northwards of $200k.)
Sometimes it just doesn't pay to be late.