rclone's --transfers flag runs file transfers in parallel, much like robocopy /MT; --multi-thread-streams additionally splits a single large file across multiple streams.
You can also run multiple instances of rsync; the problem is how to divide the set of files efficiently.
Sometimes find (with the desired -maxdepth) piped into GNU parallel running rsync works fine.
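A minimal runnable sketch of that division, assuming you split the tree at -maxdepth 1 and give each top-level directory its own worker. Here xargs -P stands in for GNU parallel, cp -R stands in for rsync -a so it runs anywhere, and all paths are hypothetical:

```shell
# Build a small hypothetical source tree with three top-level dirs.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a" "$src/b" "$src/c"
touch "$src/a/1" "$src/b/2" "$src/c/3"

# Divide the tree at depth 1, then run one copy process per
# top-level directory, up to 3 at a time (-P 3).
# With real rsync: xargs -0 -P 3 -I{} rsync -a {} "$dst/"
find "$src" -mindepth 1 -maxdepth 1 -type d -print0 |
  xargs -0 -P 3 -I{} cp -R {} "$dst/"
```

This only balances work per top-level directory, which is the weakness fpart addresses further down: one huge subdirectory still ends up on a single worker.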
robocopy! Wow, blast from the past. Used to use it all the time when I worked in a Windows shop.
My go-to for fast and easy parallelization is xargs -P.
find a-bunch-of-files -type f | xargs -P 10 -n 1 do-something-with-a-file
-P max-procs
--max-procs=max-procs
Run up to max-procs processes at a time; the default is 1.
If max-procs is 0, xargs will run as many processes as
possible at a time.
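Note the -n 1 above: without a -n (or -L) limit, xargs packs every argument into a single invocation, so -P has nothing to parallelize. A self-contained demo with hypothetical file names, using gzip as the per-file command:

```shell
# Create eight small hypothetical input files.
tmp=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8; do
  printf 'data %s\n' "$i" > "$tmp/file$i.txt"
done

# -P 4: up to 4 gzip processes at a time.
# -n 1: one file per gzip invocation, so there is work to spread.
# -print0 / -0: handle file names with spaces safely.
find "$tmp" -type f -name '*.txt' -print0 | xargs -0 -P 4 -n 1 gzip

ls "$tmp"   # each file now ends in .gz
# rm -r "$tmp"   # clean up when done
```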
> efficiently divide the set of files.
It turns out fpart does just that! Fpart is a filesystem partitioner: it sorts file trees and packs them into evenly sized bags (called "partitions"). It is written in C and available under the BSD license.
It comes with an rsync wrapper, fpsync. Now I'd like to see a benchmark of that vs rclone! via https://unix.stackexchange.com/q/189878/#688469 via https://stackoverflow.com/q/24058544/#comment93435424_255320...
https://www.fpart.org/