I don't see how an operating system can work for a cluster.
You can have more than one CPU and more than one storage device connected to a single mainboard, and that works because the interconnect fabric is very fast.
We don't have a way to connect separate computers at anywhere near that speed, which is what would let them work together seamlessly.
Check out Plan 9 and Mosix. They weren't super fast but they worked.
We built machines with all kinds of approaches to this: ones with giant shared memories and memory networks. The Tera MTA famously had uniform memory access, since all of the memory was on the other side of a network from the CPUs, and hardware-managed threads tried to hide that latency.
We built machines with RDMA that allowed fast one-sided transfers between memories at a decent fraction of local memory bandwidth, and operating systems that ran services to present a unified OS interface on top of that.
There is a whole history of distributed operating systems if you're interested.
One could argue that multiple cores are already not seamless, especially if you have NUMA (now available in high-end desktops, by the way, and in every multi-socket system that's ever existed). The distinction between RAM and disk is also very much not seamless, and so is any number of other things you'd hope the OS would magically handwave away for you, but it doesn't.
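The NUMA point is easy to check for yourself: on Linux the kernel exposes the node topology under sysfs. Here's a minimal sketch (assuming the standard `/sys/devices/system/node` layout; it just returns an empty list on non-NUMA or non-Linux systems):

```python
import os

def numa_nodes():
    """List NUMA node IDs from Linux sysfs.

    Returns [] on systems without the sysfs NUMA layout
    (non-Linux, or kernels without NUMA support).
    """
    base = "/sys/devices/system/node"
    if not os.path.isdir(base):
        return []
    return sorted(
        int(name[4:])
        for name in os.listdir(base)
        if name.startswith("node") and name[4:].isdigit()
    )

print(numa_nodes())  # e.g. [0] on a single-node box, [0, 1] on a two-socket one
```

A single-socket desktop typically shows one node; any multi-socket box shows several, and memory attached to a remote node is measurably slower to reach.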
10 Gbps Ethernet is now very cheap, and 100 Gbps is viable at hobby scale. I don't know anything about CXL and the like.
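Even so, the gap the parent describes is still there. A back-of-envelope comparison (assuming nominal figures: 100 Gbps Ethernet, one DDR5-6400 channel at 8 bytes per transfer):

```python
# Back-of-envelope: network bandwidth vs. local memory bandwidth.
GBPS_TO_GBYTES = 1 / 8  # 8 bits per byte

eth_100g = 100 * GBPS_TO_GBYTES        # 100 Gbps Ethernet -> 12.5 GB/s
ddr5_channel = 6400e6 * 8 / 1e9        # DDR5-6400, 64-bit channel -> 51.2 GB/s

print(f"100GbE:        {eth_100g:.1f} GB/s")       # 12.5 GB/s
print(f"DDR5 channel:  {ddr5_channel:.1f} GB/s")   # 51.2 GB/s
print(f"ratio:         {ddr5_channel / eth_100g:.1f}x")  # ~4.1x
```

So even 100GbE is roughly a quarter of a single memory channel's peak, before counting latency, which is where the real seamlessness problem lives.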