> poor locking support (this sounds like it works better)
File locking on Unix is in general a clusterf*ck. (There was a thread a few days ago at https://news.ycombinator.com/item?id=46542247 )
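To make that concrete, here's a minimal sketch of the classic fcntl() pitfall (not from that thread; the file name is made up): POSIX record locks belong to the process, and closing *any* descriptor for the locked file drops them, so a library routine that merely opens and closes the same file releases your lock behind your back.

    /* Sketch of the classic POSIX fcntl() locking pitfall (file name made up):
     * record locks belong to the process, and closing ANY descriptor for the
     * locked file releases them all. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.lock", O_RDWR | O_CREAT, 0644);
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = 0, .l_len = 0 };   /* lock the whole file */
        if (fd == -1 || fcntl(fd, F_SETLKW, &fl) == -1) {
            perror("lock");
            return 1;
        }

        /* Some helper or library routine innocently opens and closes the file... */
        int fd2 = open("data.lock", O_RDONLY);
        close(fd2);        /* ...and this close() silently drops OUR lock. */

        /* The program still holds fd, but it no longer holds the lock. */
        close(fd);
        return 0;
    }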
> no manual fence support; a bad but common way of distributing workloads is e.g. to compile a test on one machine (on an NFS mount), and then use SLURM or SGE to run the test on other machines. You use NFS to let the other machines access the data... and this works... except that you either have to disable write caches or have horrible hacks to make the output of the first machine visible to the others. What you really want is a manual fence: "make all changes to this directory visible on the server"
In general, file systems make for poor IPC implementations. But if you need to do it with NFS, the key is to understand the close-to-open consistency model NFS uses; see section 10.3.1 in https://www.rfc-editor.org/rfc/rfc7530#section-10.3 . Of course, you'll also want some mechanism for the writer to notify the reader that it's finished, be it with file locks, or some other entirely different protocol to send signals over the network.
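To sketch what close-to-open looks like in practice (the paths, the writer/reader split, and the omission of error handling are all just illustrative): the writer's close() is what flushes its dirty data to the server, and the reader only gets the open-time revalidation by opening a fresh descriptor afterwards, not by rereading from a descriptor it already had open.

    /* Sketch of NFS close-to-open consistency; paths are illustrative and
     * error handling is omitted.  The NFS client flushes dirty data to the
     * server when the writer close()s, and revalidates its cache when the
     * reader open()s a fresh descriptor afterwards.  A descriptor the reader
     * already had open may keep serving stale cached data. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Runs on machine A (e.g. the build host). */
    static void writer(const char *path) {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        const char *msg = "result\n";
        write(fd, msg, strlen(msg));
        close(fd);                 /* dirty data is flushed to the server here */
    }

    /* Runs on machine B, only after it knows the writer has closed the file. */
    static void reader(const char *path) {
        char buf[64];
        int fd = open(path, O_RDONLY);   /* cache revalidation happens here */
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
        close(fd);
    }

    int main(void) {
        /* In real use this path would be on the shared NFS mount. */
        writer("result.txt");
        reader("result.txt");
        return 0;
    }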
> In general, file systems make for poor IPC implementations.
I agree, but they do have real advantages: simplicity, not having to declare up front which files are needed, lazy data transfer, etc.
> you'll also want some mechanism for the writer to notify the reader that it's finished, be it with file locks, or some other entirely different protocol to send signals over the network.
The writer is always finished before the reader starts in these scenarios. The issue is that reads on one machine aren't guaranteed to be ordered after writes on a different machine, because of write caching.
It's exactly the same problem as in multithreaded code. Thread A writes a value, thread B reads it. But even if they happen sequentially in real time, thread B can still read an old value unless you have an explicit fence.
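For comparison, here's roughly what that fence looks like in shared-memory code, as a minimal C11 sketch (the names are made up): without the release/acquire fences the reader could see the flag set yet still read a stale value; with them it can't. The close()/open() pair under NFS close-to-open plays the same role.

    /* Minimal C11 sketch of the analogy; names are made up.  Thread A writes
     * a value and raises a flag; thread B waits for the flag and reads the
     * value.  The explicit fences are what guarantee B sees A's write rather
     * than a stale value. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int value;                 /* plain data being published        */
    static atomic_int ready;          /* flag meaning "value is now valid" */

    static void *thread_a(void *arg) {
        (void)arg;
        value = 42;
        atomic_thread_fence(memory_order_release);          /* publish my writes */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
        return NULL;
    }

    static void *thread_b(void *arg) {
        (void)arg;
        while (atomic_load_explicit(&ready, memory_order_relaxed) == 0)
            ;                                               /* wait for the flag */
        atomic_thread_fence(memory_order_acquire);          /* see A's writes    */
        printf("%d\n", value);                              /* prints 42, never stale */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }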