Quick and easy parallel rsync

rsync is great, but one thing it doesn’t necessarily excel at is speed. Don’t get me wrong, it’s plenty fast in most cases, but rsync ultimately transfers just one file at a time, so there are plenty of opportunities to parallelize transfers and help saturate your pipe.

There are tons of articles on how to parallelize rsync, many of them long shell scripts. Those may be what you’re after in some cases, but if you want something quick and easy, the info in this post should help.

This method assumes that you are in the directory you want to sync from, and that it contains multiple files/subdirectories. The approach simply lists everything in that directory and creates an rsync command for each entry. For example, if I have 30 subdirectories and 12 files, parallel will create 42 rsync runs. They won’t all run at once, though: by default, parallel runs as many jobs as you have CPU cores, and you can increase or decrease that as needed.
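As a rough sketch of the idea (user@remote:/path/to/dest/ is a placeholder destination, not anything specific from this post), listing the current directory and handing each entry to parallel looks something like this:

    # List every file/subdirectory on its own line and let GNU parallel
    # start one rsync per entry (by default, one job per CPU core).
    ls -1 | parallel rsync -a {} user@remote:/path/to/dest/

Each {} is replaced with one line of input, so every file or subdirectory gets its own rsync process.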

Don’t forget to adjust the rsync parameters.
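For instance (again with a placeholder destination), you might cap the job count with -j and swap in whatever rsync flags fit your transfer:

    # Run 8 rsync jobs at a time, compress data in transit,
    # and keep partial files so interrupted transfers can resume.
    ls -1 | parallel -j 8 rsync -az --partial {} user@remote:/path/to/dest/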

Instead of using ls, you can use other techniques to build the list of files/directories to rsync, and cat it out to parallel.
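For example, you could build the list with find and feed it in from a file (transfer-list.txt is just an illustrative name):

    # Collect the top-level directories into a file, then cat it into parallel.
    find . -mindepth 1 -maxdepth 1 -type d > transfer-list.txt
    cat transfer-list.txt | parallel rsync -a {} user@remote:/path/to/dest/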

One comment on “Quick and easy parallel rsync”
  1. Justin Winokur says:

    This seems like it would work, but it may also require an insane amount of overhead from the many handshake connections rsync opens. You may be able to speed it up by using SSH multiplexing (sketched below), but I still wonder if the overhead will kill any savings.
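If you want to experiment with the multiplexing idea mentioned above, a minimal ~/.ssh/config sketch (the host name remote is hypothetical) lets every rsync job reuse one SSH connection instead of negotiating a new one each time:

    # ~/.ssh/config -- share a single master SSH connection across rsync jobs
    Host remote
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m

With that in place, only the first connection pays the full handshake cost; the rest attach to the existing master.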
