The times seem sublinear, 10k files is less than 10x 1k files.
I remember getting into a situation back in the ext2 and spinning-rust days where production directories had 500k files. ls processes were slow enough to overload everything; ls -F saved me there.
And filesystems got a lot better at lots of files. What filesystem was used here?
It's interesting how well BusyBox fares; it's written for size, not speed, IIRC?
ext2 never got better with large directories, even on SSDs (and that holds all the way up to ext4). The benchmarks don't mention the filesystem type, which is actually extremely important for the performance of reading directories.
> The times seem sublinear, 10k files is less than 10x 1k files
Two data points are not enough to call it sublinear. It might very well be a constant overhead that matters less and less as the linear term grows.
Or in other words: 10000n + C < 10(1000n + C).
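A quick sketch of the point, with made-up numbers (a per-file cost and a fixed startup overhead are both assumptions, not measurements from the benchmark):

```python
# Hypothetical cost model: T(n) = a*n + C, i.e. purely linear per-file
# cost plus a constant startup overhead. All numbers are invented.
a_us = 1.0      # assumed per-file cost in microseconds
C_us = 5000.0   # assumed constant overhead (process startup etc.)

t_1k = 1_000 * a_us + C_us    # 6,000 µs
t_10k = 10_000 * a_us + C_us  # 15,000 µs

# 10x the files, but only 2.5x the time, despite strictly linear scaling.
ratio = t_10k / t_1k
print(ratio)  # → 2.5
```

So a 10k run coming in well under 10x the 1k run is exactly what you'd expect from a linear cost plus constant overhead; you'd need more data points to distinguish that from genuinely sublinear behavior.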