Hacker News

hedora · today at 3:11 PM · 3 replies

Does syncthing work yet?

~ 5 years ago, I had a development flow that involved a large source tree (1-10K files, including build output) that was syncthing-ed over a residential network connection to some k8s stuff.

Desyncs/corruptions happened constantly, even though it was a one-way send.

I've never had similar issues with rsync or unison (well, I have in unison, but that's two-way sync, and it always prompted to ask for help by design).

Anyway, my decade-old synology is dying, so I'm setting up a replacement. For other reasons (mostly a decade of systemd / pulse audio finding novel ways to ruin my day, and not really understanding how to restore my synology backups), I've jumped ship over to FreeBSD. I've heard good things about using zfs to get:

sanoid + syncoid -> zfs send -> zfs recv -> restic
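Roughly what that first pipeline looks like in practice, as a sketch: sanoid takes snapshots on a schedule, syncoid replicates them (it drives `zfs send | zfs recv`, incrementally), and restic backs up the received copy. All pool, dataset, host, and bucket names below are placeholders:

```ini
# /etc/sanoid/sanoid.conf -- hypothetical dataset and template names

[tank/data]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 30
        monthly = 6
        autosnap = yes
        autoprune = yes

# Replication and offsite backup, run from cron or a timer:
#
#   syncoid tank/data backuphost:backup/data
#   restic -r b2:my-bucket:/ backup /backup/data
```

The nice property here is that restic never sees the live filesystem, only a replica that zfs has already checksummed end to end.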

In the absence of ZFS, I'd do:

rsync -> restic

Or:

unison <-> unison -> restic.

So, similar to what you've landed on, but with one size tier. I have docker containers that the phone talks to for stuff like calendars, and just have the source of the backup flow host my git repos.

One thing to do no matter what:

Write at least 100,000 files to the source, back it up, then restore from backup (/ on a Linux VM is a great source tree for this). Run rsync in dry-run / checksum mode on the two trees, and confirm the metadata and contents match on both sides. I haven't gotten around to this yet with the flow I just proposed. Almost all consumer backup tools fail this test; comments here suggest Backblaze's consumer offering fails it badly. I'm using B2, but I haven't scrubbed my backup sets in a while. I get the impression it has much higher consistency / durability.


Replies

hiddendoom45 · today at 4:18 PM

I've personally had no major issues with syncthing; it just works in the background. The largest folder I have synced is ~6TB and 200k files, mirroring a backup I keep on a large external drive.

One particular issue I've encountered is that syncthing 2.x does not work well on systems without an SSD: the storage backend switched from leveldb to sqlite, which doesn't perform as well on HDDs, and scans of the 6TB folder were taking excessively long to complete compared to 1.x. I haven't encountered any issues with mixing 1.x and 2.x in my setup. The only other issues I've run into are usually filename incompatibilities between filesystems.

SCdF · today at 6:06 PM

I will say I specifically don't sync git repos (they are just local and pushed to github, which I consider good enough for now), and I am aware that syncthing is one of those tools that does not work well with git.

syncthing is not perfect, and can get into weird states if, for example, you add and remove devices, but for my case I think it is the best option.

Nnnes · today at 3:53 PM

Anecdotally, I've been managing a Syncthing network with a file count in the ~200k range, everything synced bidirectionally across a few dozen (Windows) computers, for 9 years now; I've never seen data loss where Syncthing was at fault.
