Hacker News

jabl · 01/16/2026 · 1 reply

> The writer is always finished before the reader starts in these scenarios. The issue is reads on one machine aren't guaranteed to be ordered after writes on a different machine due to write caching.

In such a case it should be sufficient to rely on NFS close-to-open consistency as explained in the RFC I linked to in the previous message. Closing a file forces a flush of any dirty data to the server, and opening a file forces a revalidation of any cached content.
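The close-to-open pattern can be sketched in a few lines. This is a minimal illustration, not NFS-specific code: the path and data are placeholders, and the comments describe what the NFS client guarantees when the file lives on an NFS mount.

```python
def write_result(path: str, data: str) -> None:
    # Under NFS close-to-open consistency, close() flushes any dirty
    # cached pages to the server before returning, so once this
    # function returns the data is visible on the server.
    with open(path, "w") as f:
        f.write(data)
    # File is now closed: writes have been committed to the server.

def read_result(path: str) -> str:
    # open() revalidates cached attributes/data against the server,
    # so a reader that opens strictly *after* the writer closed
    # sees the new contents.
    with open(path) as f:
        return f.read()

# The guarantee only holds for this ordering: the writer's close()
# must happen before the reader's open() (e.g. enforced externally
# by a job scheduler or barrier).
write_result("/tmp/shared-result.txt", "result: 42\n")
print(read_result("/tmp/shared-result.txt"))
```

Note the ordering requirement: if the reader opens the file while the writer still has it open, close-to-open consistency promises nothing.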

If that doesn't work, your NFS is broken. ;-)

And if you need 'proper' cache coherency, something like Lustre is an option.


Replies

IshKebab · 01/16/2026

It wasn't my job, so I didn't look into this fully, but the main issue we had was clients claiming that files didn't exist when they did. I just reread the nfs(5) man page, and I guess this is the issue:

> To detect when directory entries have been added or removed on the server, the Linux NFS client watches a directory's mtime. If the client detects a change in a directory's mtime, the client drops all cached LOOKUP results for that directory. Since the directory's mtime is a cached attribute, it may take some time before a client notices it has changed. See the descriptions of the acdirmin, acdirmax, and noac mount options for more information about how long a directory's mtime is cached.

> Caching directory entries improves the performance of applications that do not share files with applications on other clients. Using cached information about directories can interfere with applications that run concurrently on multiple clients and need to detect the creation or removal of files quickly, however. The lookupcache mount option allows some tuning of directory entry caching behavior.
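For reference, the mount options the man page mentions look something like this; the server name and export path are placeholders:

```shell
# Shorten how long a stale directory view can persist
# (attribute cache bounds for directories, in seconds).
mount -t nfs -o acdirmin=1,acdirmax=5 server:/export /mnt/shared

# Or disable caching of directory entry (LOOKUP) results entirely:
mount -t nfs -o lookupcache=none server:/export /mnt/shared

# lookupcache=pos keeps positive lookups cached but not negative ones,
# so "file does not exist" answers are always rechecked with the server.
mount -t nfs -o lookupcache=pos server:/export /mnt/shared
```

`lookupcache=pos` targets exactly the symptom described above (clients claiming files don't exist), at less cost than `lookupcache=none` or `noac`.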

People did talk about using Lustre or GPFS, but apparently they are really complex to set up and may need fancier networking than Ethernet; I don't remember the details.
