Note there is no intrinsic reason running multiple streams should be faster than one [EDIT: "at this scale"]. It almost always indicates some bottleneck in the application or TCP tuning. (Though very fast links can overwhelm slow hardware, and ISPs might do some traffic shaping too, but neither applies to local links.)
SSH was never really meant to be a high-performance data transfer tool, and it shows. For example, it has a hardcoded maximum receive buffer of 2MiB (separate from the TCP one), which drastically limits transfer speed over high-BDP links (even a fast local link, like the 10gbps one the author has). The encryption can also be a bottleneck. hpn-ssh [1] aims to solve this issue, but I'm not so sure about running an ssh fork on important systems.
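To make the window math concrete: a flow-controlled stream can keep at most one window of data in flight per round trip, so throughput is capped at roughly window / RTT. A quick back-of-the-envelope sketch in Python (the RTT values are arbitrary examples, not measurements of the author's link):

    # Window-limited throughput: at most one window of unacknowledged data
    # can be in flight per round trip, so throughput <= window / RTT.
    WINDOW = 2 * 1024 * 1024  # OpenSSH's per-channel window cap, 2 MiB

    for rtt_ms in (0.5, 2, 10, 50, 100):
        bits_per_s = WINDOW * 8 / (rtt_ms / 1000)
        print(f"RTT {rtt_ms:>5} ms -> at most {bits_per_s / 1e9:.2f} Gbit/s")

Even a couple of milliseconds of RTT is enough for a 2 MiB window to cap you below 10 Gbit/s, and at WAN latencies it gets far worse.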
In general, TCP just isn't great for high performance. In the film industry we used to use a commercial product, Aspera (now owned by IBM), which emulated ftp or scp but used UDP with forward error correction (instead of TCP retransmission). You could configure it to use a specific amount of bandwidth and it would just push everything else off the network to achieve it.
> has a hardcoded maximum receive buffer of 2MiB
For completeness, I want to add:
The 2MiB is per SSH "channel" -- the SSH protocol multiplexes multiple independent transmission channels over a single TCP connection [1], and each channel has its own window size.
rsync and `cat | ssh | cat` only use a single channel, so if their counterparty is an OpenSSH sshd server, their throughput is limited by the 2MiB window limit.
rclone seems to be able to use multiple ssh channels over a single connection; I believe this is what the `--sftp-concurrency` setting [2] controls.
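As a usage sketch (the remote name `sftp-remote` is hypothetical, and the value is just rclone's documented default; see the rclone sftp docs [2] for the exact semantics of the option):

    rclone copy /data/src sftp-remote:dst --sftp-concurrency 64 --progress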
Some more discussion of the 2MiB limit, and links to work on upstreaming a removal of these limits, can be found in my post [3].
Looking into it just now, I found that the SSH protocol itself already supports dynamically growing per-channel window sizes with `CHANNEL_WINDOW_ADJUST`, and OpenSSH generally seems to implement that. I don't fully grasp why it doesn't just use that to grow the window as needed.
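For reference, the adjust message itself is tiny (RFC 4254, section 5.2 [1]): a message type byte, the recipient channel id, and the number of bytes to add to the window. A minimal sketch of how a peer would encode it (the channel id and increment below are made-up example values):

    import struct

    SSH_MSG_CHANNEL_WINDOW_ADJUST = 93  # RFC 4254

    def window_adjust(recipient_channel: int, bytes_to_add: int) -> bytes:
        # byte      SSH_MSG_CHANNEL_WINDOW_ADJUST
        # uint32    recipient channel
        # uint32    bytes to add
        return struct.pack(">BII", SSH_MSG_CHANNEL_WINDOW_ADJUST,
                           recipient_channel, bytes_to_add)

    # e.g. grant the peer another 2 MiB of window on channel 0
    payload = window_adjust(0, 2 * 1024 * 1024)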
I also found that there's an official `no-flow-control` extension with the description
> channel behaves as if all window sizes are infinite.
>
> This extension is intended for, but not limited to, use by file transfer applications that are only going to use one channel and for which the flow control provided by SSH is an impediment, rather than a feature.
So this looks like it was designed exactly for the rsync use case. But no software implements this extension!
I wrote those things down in [4].
It is frustrating to me that we're only a ~200-line patch away from "unlimited" instead of shitty SSH transfer speeds -- and have been for >20 years!
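For the curious, the extension would be advertised via `SSH_MSG_EXT_INFO` from RFC 8308; if I read that RFC correctly, the extension value is a single character, "p" (preferred) or "a" (acceptable). A rough sketch of what a client advertising it might send (illustrative only -- as noted above, no mainstream implementation actually does this):

    import struct

    SSH_MSG_EXT_INFO = 7  # RFC 8308

    def ssh_string(b: bytes) -> bytes:
        # SSH wire-format string: uint32 length followed by the bytes
        return struct.pack(">I", len(b)) + b

    # EXT_INFO advertising a single extension: no-flow-control = "p" (preferred)
    payload = (struct.pack(">B", SSH_MSG_EXT_INFO)
               + struct.pack(">I", 1)              # nr-extensions
               + ssh_string(b"no-flow-control")
               + ssh_string(b"p"))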
[1]: https://datatracker.ietf.org/doc/html/rfc4254#section-5
[2]: https://rclone.org/sftp/#sftp-concurrency
[3]: https://news.ycombinator.com/item?id=40856136
[4]: https://github.com/djmdjm/openssh-portable-wip/pull/4#issuec...
> It almost always indicates some bottleneck in the application or TCP tuning.
Yeah, this has been my experience with low-overhead streams as well.
Interestingly, I see this "open more streams to send more data" pattern all over the place in file transfer tooling.
Recent examples that come to mind are Backblaze's B2 CLI and, after taking a peek with Wireshark, Amazon's SDK for S3 uploads. (What do they know that we don't seem to think we know?)
It seems like they're all doing this, which is maybe odd, because when I analyse what Plex or Netflix is doing, it's not the same: they do what you're suggesting and tune the application plus the TCP/UDP stack. Though that could be due to their 1-to-1 streaming use case.
There is overhead somewhere and they're trying to get past it via semi-brute-force methods (in my opinion).
I wonder if there is a serialization or loss-handling problem that we could be glossing over here?
Uhh.. I work with this stuff daily and there are a LOT of intrinsic reasons a single stream would be slower than multiple: MPLS ECMP hashing you onto a single path, a single loss event on a high-BDP path making congestion control kick in for that one flow, CPU IRQ affinity, and probably many more I'm not thinking of, like the inner workings of NIC offload queues.
Source: Been in big tech for roughly ten years now trying to get servers to move packets faster
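To put a rough number on the single-flow loss point: the classic Mathis et al. approximation for a loss-limited, Reno-style TCP flow is throughput ~ MSS / (RTT * sqrt(p)), so one flow hitting random loss caps out quickly, while N flows get roughly N times the aggregate (assuming the loss isn't caused by the flows themselves). A sketch with invented example numbers:

    import math

    def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
        # Mathis et al. approximation for a loss-limited (Reno-style) TCP flow:
        # throughput ~= MSS / (RTT * sqrt(p))
        return mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

    # Example: 1460-byte MSS, 40 ms RTT, 0.01% random loss
    single = mathis_throughput_bps(1460, 0.040, 1e-4)
    print(f"single flow : {single / 1e6:.0f} Mbit/s")
    print(f"8 flows     : {8 * single / 1e6:.0f} Mbit/s (roughly additive)")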
> Note there is no intrinsic reason running multiple streams should be faster than one
If the server side scales out (as cloud services do), the parallel connections might end up hitting different endpoints and saturate the bandwidth better. One server instance might be serving other clients as well and can't fill one particular client's pipe entirely.
Wouldn't lots of streams speed up transfers of thousands of small files?
The author tried running an rsyncd daemon, so it's not _just_ the ssh protocol.
Per-file overheads (opening millions of tiny files whose metadata is not in the OS cache, then reading them) appear to be an intrinsic reason (intrinsic to the OS, at least).
I mean, isn't a single TCP connection's throughput limited by the latency? Which is why on high(er)-latency WAN links you generally want to open multiple connections for large file transfers.
> Note there is no intrinsic reason running multiple streams should be faster than one.
The issue is the serialization of operations. There is overhead for each operation, which translates into dead time between transfers.
However, there are also issues that can cause a single stream to underperform multiple streams in the real world, once you reach a certain scale or face problems like packet loss.
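A toy model of that dead time (all numbers below are invented for illustration): if each file costs one round trip of per-file setup before its bytes move, throughput on lots of small files collapses, and keeping many transfers in flight mostly hides the setup cost.

    # Toy model: per-file setup cost (one RTT) vs. actual data transfer time.
    rtt_s = 0.020                 # 20 ms per-file setup round trip (assumed)
    link_bps = 1e9                # 1 Gbit/s link (assumed)
    file_bytes = 64 * 1024        # 64 KiB files (assumed)
    n_files = 10_000

    xfer_s = file_bytes * 8 / link_bps                      # useful transfer time per file
    serial_s = n_files * (rtt_s + xfer_s)                   # one file at a time
    parallel_s = n_files * xfer_s + rtt_s * n_files / 32    # ~32 setups in flight at once

    print(f"serial        : {serial_s:.0f} s")
    print(f"32 in flight  : {parallel_s:.0f} s")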