Hacker News

JuettnerDistrib last Friday at 4:20 PM

I'd be curious to know if this helps on supercomputers, which are notorious for frequently hanging for a few seconds on an `ls -l`.


Replies

mrlongroots last Friday at 6:37 PM

It could, but it's important to keep in mind that the filesystem architecture there is very different: a parallel filesystem with disaggregated data and metadata.

When you run `ls -l`, you could be enumerating a directory with one file per rank, or worse, one file per particle or something. You could try making the read fast, but I also think it makes no sense to have that many files in the first place: there are things you can do to reduce the number of files on disk. Many people are also pushing for distributed object stores instead of parallel filesystems... fun space.
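
For intuition, here is a minimal Python sketch (not from the comment; the `ls_l` helper is hypothetical) of what `ls -l` roughly boils down to: one readdir followed by a stat per entry. On a parallel filesystem, each stat is typically a round trip to a metadata server, so a file-per-rank directory turns into thousands of serialized metadata lookups.

```python
import os
import time

def ls_l(path):
    """Rough sketch of what `ls -l` does: one readdir, then one stat()
    per entry. On a parallel filesystem each stat() can be a network
    round trip to the metadata service, which is where the multi-second
    hangs come from."""
    t0 = time.monotonic()
    entries = os.listdir(path)  # single readdir stream
    stats = [os.stat(os.path.join(path, e)) for e in entries]  # N metadata lookups
    elapsed = time.monotonic() - t0
    per_entry_ms = elapsed / max(len(entries), 1) * 1e3
    print(f"{len(entries)} entries in {elapsed:.2f}s (~{per_entry_ms:.2f} ms per stat)")
    return list(zip(entries, stats))

if __name__ == "__main__":
    ls_l(".")  # point this at a file-per-rank output directory to see the effect
```

On a local filesystem the per-entry stat is cheap; when data and metadata are disaggregated across servers, the per-entry cost dominates, which is why reducing the file count (e.g. shared files instead of file-per-rank output) helps more than making any single stat faster.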