> In general, file systems make for poor IPC implementations.
I agree, but they do have advantages: simplicity, not needing to explicitly declare which files are needed, lazy data transfer, etc.
> you'll also want some mechanism for the writer to notify the reader that it's finished, be it with file locks, or some other entirely different protocol to send signals over the network.
The writer always finishes before the reader starts in these scenarios. The issue is that reads on one machine aren't guaranteed to be ordered after writes on a different machine, due to write caching.
It's exactly the same problem as in multithreaded code: thread A writes a value, thread B reads it. Even if the two operations happen sequentially in real time, thread B can still read a stale value unless there's an explicit fence.
> The writer is always finished before the reader starts in these scenarios. The issue is reads on one machine aren't guaranteed to be ordered after writes on a different machine due to write caching.
In such a case it should be sufficient to rely on NFS close-to-open consistency as explained in the RFC I linked to in the previous message. Closing a file forces a flush of any dirty data to the server, and opening a file forces a revalidation of any cached content.
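As a rough sketch of that pattern (the path and function names are illustrative, assuming both machines mount the same NFS export), the key is that the writer fully closes the file before the reader opens it:

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Writer machine: the close at end of scope flushes any dirty
// client-cached data to the NFS server (close-to-open, step 1).
void write_message(const char* path, const std::string& data) {
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out << data;
}   // destructor closes the file: dirty data is pushed to the server

// Reader machine: a fresh open revalidates cached attributes and
// content (close-to-open, step 2), so the read sees everything
// written before the writer's close.
std::string read_message(const char* path) {
    std::ifstream in(path, std::ios::binary);
    return std::string(std::istreambuf_iterator<char>(in), {});
}
```

The same functions work on a local filesystem too, of course; the close-to-open guarantee only matters when the two calls run on different NFS clients.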
If that doesn't work, your NFS is broken. ;-)
And if you need 'proper' cache coherency, something like Lustre is an option.