Hacker News

4x faster network file sync with rclone (vs rsync) (2025)

191 points · by indigodaddy · last Friday at 3:17 AM · 93 comments

Comments

digiown · today at 3:38 PM

Note there is no intrinsic reason running multiple streams should be faster than one [EDIT: "at this scale"]. It almost always indicates some bottleneck in the application or TCP tuning. (Though very fast links can overwhelm slow hardware, and ISPs might do some traffic shaping too; neither applies to local links.)

SSH was never really meant to be a high-performance data transfer tool, and it shows. For example, it has a hardcoded maximum receive buffer of 2 MiB (separate from the TCP one), which drastically limits transfer speed over high-BDP links (even a fast local link, like the 10 Gbps one the author has). The encryption can also be a bottleneck. hpn-ssh [1] aims to solve this, but I'm not so sure about running an ssh fork on important systems.
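
To put a rough number on that (taking the 2 MiB figure above at face value; the exact limit depends on the OpenSSH version): a fixed receive window caps throughput at roughly window ÷ RTT, so the ceiling drops fast as latency rises:

  2 MiB / 0.2 ms RTT ≈ 10 GiB/s   (LAN: the window isn't the limit)
  2 MiB / 10 ms RTT  ≈ 200 MiB/s  ≈ 1.7 Gbit/s
  2 MiB / 50 ms RTT  ≈ 40 MiB/s   ≈ 0.3 Gbit/s

Each parallel stream brings its own window, which is one reason multiple streams paper over the cap.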

1. https://github.com/rapier1/hpn-ssh

ericpauley · today at 3:25 PM

Rclone is a fantastic tool, but my favorite part of it is actually the underlying FS library. I've started baking Rclone FS into internal Go tooling, and now everything transparently supports reading/writing either local or remote storage. Really great for being able to test data analysis code locally and then run it as batch jobs elsewhere.
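
For the curious, a minimal sketch of that pattern (paths and the remote name are hypothetical; this assumes rclone's fs and fs/operations packages, whose exact signatures can shift between rclone versions):

  package main

  import (
      "context"
      "log"

      _ "github.com/rclone/rclone/backend/all" // registers local, s3, sftp, ... backends
      "github.com/rclone/rclone/fs"
      "github.com/rclone/rclone/fs/operations"
  )

  func main() {
      ctx := context.Background()

      // The same code handles a local path or any configured remote.
      src, err := fs.NewFs(ctx, "/tmp/src") // local directory
      if err != nil {
          log.Fatal(err)
      }
      dst, err := fs.NewFs(ctx, "s3remote:bucket/prefix") // hypothetical remote from rclone.conf
      if err != nil {
          log.Fatal(err)
      }

      // Copy one file between the two filesystems, whatever they happen to be.
      if err := operations.CopyFile(ctx, dst, src, "report.csv", "report.csv"); err != nil {
          log.Fatal(err)
      }
  }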

coreylane · today at 3:40 PM

RClone has been so useful over the years I built a fully managed service on top of it specifically for moving data between cloud storage providers: https://dataraven.io/

My goal is to smooth out some of the operational rough edges I've seen companies deal with when using the tool:

  - Team workspaces with role-based access control
  - Event notifications & webhooks – Alerts on transfer failure or resource changes via Slack, Teams, Discord, etc.
  - Centralized log storage
  - Vault integrations – Connect 1Password, Doppler, or Infisical for zero-knowledge credential handling (no more plain text files with credentials)
  - 10 Gbps connected infrastructure (Pro tier) – High-throughput Linux systems for large transfers
edvardsire · today at 8:19 PM

Interesting that nobody has mentioned Warp speed Data Transfer (WDT) [1].

From the readme:

- Warp speed Data Transfer (WDT) is an embeddedable library (and command line tool) aiming to transfer data between 2 systems as fast as possible over multiple TCP paths.

- Goal: Lowest possible total transfer time - to be only hardware limited (disc or network bandwidth not latency) and as efficient as possible (low CPU/memory/resources utilization)

1. https://github.com/facebook/wdt

tonymet · today at 8:57 PM

Go's concurrent IO is so accessible that even trivial IO-transform scripts (e.g. compression, base64, md5sum/cksum) are easy to parallelize across cores.

You'd be astonished at how much faster even seemingly fast local IO can go when you unblock it.
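
A minimal sketch of the worker-pool version of that idea (standard library only; file paths come from argv, worker count pinned to core count, everything here is illustrative rather than from the comment):

  package main

  import (
      "crypto/md5"
      "fmt"
      "io"
      "os"
      "runtime"
      "sync"
  )

  // md5sum streams a file through MD5 without loading it all into memory.
  func md5sum(path string) (string, error) {
      f, err := os.Open(path)
      if err != nil {
          return "", err
      }
      defer f.Close()
      h := md5.New()
      if _, err := io.Copy(h, f); err != nil {
          return "", err
      }
      return fmt.Sprintf("%x", h.Sum(nil)), nil
  }

  func main() {
      jobs := make(chan string)
      var wg sync.WaitGroup

      // One worker per core; each pulls paths off the channel.
      for i := 0; i < runtime.NumCPU(); i++ {
          wg.Add(1)
          go func() {
              defer wg.Done()
              for p := range jobs {
                  sum, err := md5sum(p)
                  if err != nil {
                      fmt.Fprintln(os.Stderr, err)
                      continue
                  }
                  fmt.Printf("%s  %s\n", sum, p) // each Printf is one write; fine for a sketch
              }
          }()
      }

      for _, p := range os.Args[1:] { // e.g. go run . *.iso
          jobs <- p
      }
      close(jobs)
      wg.Wait()
  }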

ftchd · today at 6:57 PM

Rclone is such an elegant piece of software; it reminds me of a time when most software worked well most of the time. There are few people who wouldn't benefit from it, either as developers or end-users.

I'm currently working on the GUI if you're interested: https://github.com/rclone-ui/rclone-ui

newsoftheday · today at 4:19 PM

I prefer rsync because of its delta transfer, which only resends the changed parts of files already on the destination, saving bandwidth. Combined with rsync's ability to work over ssh, this lets me sync anywhere rsync runs, including the cloud. It may not be faster than rclone, but it is easier on bandwidth.

cachius · today at 3:20 PM

rclone's --multi-thread-streams splits a single file across parallel streams (and --transfers moves several files at once), like robocopy /MT.

You can also run multiple instances of rsync; the problem is how to efficiently divide the set of files.
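
One common shape for both (paths and counts are illustrative; the rclone flags are real, and the xargs trick assumes no spaces in the top-level names):

  # rclone: 8 files in flight, 4 streams per large file
  rclone copy /data remote:backup --transfers 8 --multi-thread-streams 4

  # rsync: one worker per top-level directory, 8 at a time
  ls /data | xargs -P8 -I{} rsync -a /data/{}/ host:/backup/{}/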

indigodaddy · today at 3:19 PM

One thing that perhaps sets rsync apart is its handling of hard links, for when you don't want to send both copies of a duplicated file to the destination? Not sure if rclone can do that.

xoa · today at 4:21 PM

Thanks for sharing; I hadn't seen it, but at almost the same time he made that post I too was struggling to get decent NAS<>NAS transfer speeds with rsync. I should have thought to play more with rclone! I ended up using iSCSI, but that is a lot more trouble.

>In fact, some compression modes would actually slow things down as my energy-efficient NAS is running on some slower Arm cores

Depending on the number/type of devices in the setup and the usage patterns, it can sometimes be effective to have a single more powerful router and use it directly as a hop for security or compression (or both) in front of a set of lower-power devices.

Like, I know it's not E2EE in the same way to send unencrypted data to one OPNsense router, WireGuard (or Nebula or whatever tunnel you prefer) to another over the internet, and then from there to a NAS. But if the NAS is in the same physically secure rack, directly attached to the router by hardline (or via an isolated switch), I don't think in practice it's enough less secure at the private-service level to matter. If the router is a pretty important linchpin anyway, it can be favorable to lean more heavily on it so one can go cheaper and lower power elsewhere.

Not that more efficiency, hardware acceleration, etc. are at all bad, and conversely it sometimes makes sense to have a powerful NAS/other servers and a low-power router, but there are good degrees of freedom there. Handier than ever in the current crazy times, where hardware that was formerly easy and cheap to get is now a king's ransom or gone, and one has to improvise.

kwanbix · today at 6:44 PM

It is crazy to see how difficult Google makes it for anyone to download their own pictures from Google Photos. Rclone used to allow you to download them, but not anymore; only the ones uploaded by Rclone are available to download. I wish someone forced all cloud providers to let you download your own data. And no, Google Takeout doesn't count; it is horrible to use.

Dunedan · today at 5:02 PM

I wonder if at least part of the reason for the speedup isn't the multi-threading, but rather that rclone maybe doesn't compress transferred data by default. That's what rsync does when using SSH, so for already-compressed data (like videos, for example) disabling SSH compression when invoking rsync speeds it up significantly:

  rsync -e "ssh -o Compression=no" ...
aidenn0 · today at 5:02 PM

rclone is not as good as rsync for doing ad-hoc transfers; for anything not using the local filesystem, you need to set up a configuration, which adds friction. It really is purpose-built for recurring transfers rather than "I need to move X to Y just once".

KolmogorovComp · today at 4:18 PM

Why are rclone/rsync never used by default for app updates? Especially games with large assets.

rurban · today at 6:03 PM

Thanks for the lms tips in the comments. Amazing!

packetlost · today at 3:48 PM

I use tab-complete to navigate remote folder structures with rsync all the time; does rclone have that?

gjvc · today at 5:59 PM

May 6, 2025

sneak · today at 5:14 PM

What’s sad to me is that rsync hasn’t been touched to fix these issues in what feels like decades.

baal80spam · today at 3:08 PM

I'll keep saying that rclone is a fantastic and underrated piece of software.
