I've been running either Qubes OS or KVM/QEMU-based VMs as my desktop daily driver for 10 years. Nothing runs on bare metal except the host kernel/hypervisor and the virt stack.
I've achieved near-native performance for intensive activities like gaming, music, and visual production. Hardware acceleration is still kind of a mess, but with tricks like GPU passthrough for multiple cards, a dedicated audio card, and block device passthrough, I get great latency and performance.
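For the curious, a minimal sketch of what that looks like as a raw QEMU invocation (the PCI addresses and disk path here are made up, and the host-side work of enabling the IOMMU and binding the devices to vfio-pci is assumed to be done already; libvirt/Qubes express the same thing as XML):

    # GPU at 01:00.0 plus its HDMI audio function, a dedicated sound
    # card at 03:00.0, and a whole disk handed over as a virtio block
    # device -- addresses/paths are placeholders, yours will differ
    qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -smp 8 -m 16G \
      -device vfio-pci,host=0000:01:00.0 \
      -device vfio-pci,host=0000:01:00.1 \
      -device vfio-pci,host=0000:03:00.0 \
      -blockdev driver=host_device,node-name=disk0,filename=/dev/disk/by-id/... \
      -device virtio-blk-pci,drive=disk0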
One benefit of this is that my desktop acts as a mainframe, and streaming machines to thin clients is easy.
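One way to do the streaming is SPICE; roughly (the hostname and port are made up, and this assumes a network you trust enough to skip ticketing):

    # On the "mainframe", give the guest a SPICE display:
    qemu-system-x86_64 ... -vga qxl -spice port=5900,addr=0.0.0.0,disable-ticketing=on

    # On the thin client (virt-viewer package):
    remote-viewer spice://mainframe:5900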
My model for a long time has been not to trust anything I run, and this lets me keep both my own and my clients' work reasonably safe from a drive-by NPM install or something of that caliber.
Now that I also use an Apple Silicon MacBook as a daily driver, I very much miss the comfort of a fully virtualized system. I do stream virtual machines from my mainframe, but the way Tahoe is shaping up, I might soon put Asahi on this machine and go back to a fully virtualized system.
I think this is the ideal way to do things. However, it will need to operate mostly transparently to the end user, or they will quickly get security fatigue; the sacrifices involved today are not for those who lack patience.
Also, relevant XKCDs:
https://www.explainxkcd.com/wiki/index.php/2044:_Sandboxing_...
> One benefit of this is that my desktop acts as a mainframe,
Are you for real? Tell us you've never worked on a mainframe without telling us you've never worked on a mainframe.
I think it's fine if you do it for yourself. It's a bit of a poor man's Linux-turned-microkernel solution. In fact, I work like this too, and this extends to my Apple Silicon Mac. The separation does have big security advantages, especially when different pieces of hardware are exclusively passed to the different, closed-off "partitions" of the system and the layer orchestrating everything is as minimal as it gets, or at least as guarded against the guests as it gets.
What worries me is when this model escalates from something cobbled together by a system administrator with limited resources to something baked into the design of software: the appropriation of the hypervisor layer by developers who are reluctant to untangle the mess they've created at the user/kernel boundary of their program, who instead build on top of hardware virtualization for "security", and who ultimately go on to pollute the hypervisor once the level of host OS access proves insufficient. This is beautifully portrayed by the first XKCD you've linked. I don't want to lose the ability to securely run VMs because the interface between the host and the guest OSes has grown just as unmanageable as that of Linux and BSD system calls, with new software demanding that I let it use the entirety of it, just as some software already insists that I let it run as root because privilege dropping was never implemented.
If you develop software, you should know what kind of operating system access it needs to function, and you should sandbox it appropriately using the operating system's sandboxing facilities, not the tools reserved for system administrators.
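On a systemd-based Linux, for instance, a service can declare its own sandbox right in its unit file. A minimal sketch (the service name is made up, and the syscall filter would need tuning per program):

    [Service]
    ExecStart=/usr/bin/example-daemon
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    CapabilityBoundingSet=
    SystemCallFilter=@system-service

Equivalent facilities exist elsewhere (pledge/unveil on OpenBSD, App Sandbox on macOS), none of which require commandeering the hypervisor.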