Performance / CPU utilization

Tastal     Jan 9 1:05PM 2018 CLI

Recently I started evaluating whether the Duplicacy CLI (Linux binary) could be an option for backing up my Synology NAS to Backblaze B2, as Synology's Cloud Sync implementation for B2 has too many flaws: mainly, it doesn't encrypt file names, it is not block-based in any way (so it re-uploads whole files even if 1 KB changes in a 100 GB file), and it is wasteful with B2 space in other ways.

So first of all, I was happy to see that the Linux binary just works on my NAS when I SSH into it. That makes it super easy to set up a manually scheduled task to back up the NAS with Duplicacy, which is great.

The initial backup (~1.5 TB of files of widely varying size) took about 5 days, which is exactly what was to be expected since my upload bandwidth was used at nearly 100%. So no bottleneck there and good job so far. However, follow-up backups now take about 10 to 15 hours (2 tests so far, with a third currently running). Notably, there is very little upload during these runs (about 1 to 10 GB, which should be uploaded in under an hour) and also nearly no disk IOPS, reads/writes, or utilization. The CPU sits constantly at 25% for the duplicacy task. Since my Synology has an Intel quad core, my hunch is that it is using only one core and that the process could maybe be accelerated a lot if it used all, or at least 3, cores. (Needless to say, I'm using Duplicacy with encryption.)
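
A quick way to confirm that it really is one saturated core (and not 25% spread over all four) would probably be to look at per-thread CPU usage, e.g. something like this (just a sketch; it assumes a top that supports the per-thread view and that the process is named duplicacy_linux, and the stock BusyBox top may not support -H):

top -H -p "$(pidof duplicacy_linux)"    # per-thread CPU of the running duplicacy process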

So my questions now are:

  • Can I somehow achieve faster follow-up backups / use more of the CPU?
  • Is this in general a good setup, or should I look for other solutions (which I'd rather not, as I like the principles of Duplicacy very much! ;-))


gchen    Jan 9 1:35PM 2018

Are you running the backup with the -hash option? By default, Duplicacy finds changed files by comparing timestamps and sizes, but if the -hash option is specified it rescans every file, and 10 to 15 hours is a reasonable time to rescan 1.5 TB.
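
If you want to see the metadata this comparison is based on, one rough way (just a sketch, assuming a GNU-style stat is available on your NAS, and using a placeholder path for your repository root) is to record the modification time and size of every file before and after a backup and diff the two lists:

cd /path/to/repository
find . -type f -exec stat -c '%Y %s %n' {} + | sort -k3 > /tmp/scan-before.txt
# ... run the next backup, then repeat the same find into /tmp/scan-after.txt ...
diff /tmp/scan-before.txt /tmp/scan-after.txt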


Tastal    Jan 9 1:56PM 2018

Nope, this is all there is to my backup command:

./duplicacy_linux backup -threads 8

And this was my init:

./duplicacy_linux init -e -c 8M MyLib b2://Synology-MyLib 

Keys and passwords are held in environment variables. Duplicacy version is 2.0.10.
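
For completeness, this is roughly what my scheduled task exports before calling the binary (I'm citing the variable names from memory from the password documentation, so treat them as an approximation; values redacted of course):

export DUPLICACY_PASSWORD='***'   # storage encryption password
export DUPLICACY_B2_ID='***'      # Backblaze account ID
export DUPLICACY_B2_KEY='***'     # Backblaze application key
cd /volume2/MyLib && ./duplicacy_linux backup -threads 8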

I was thinking of splitting MyLib into 4 roughly equal parts and running 4 separate duplicacy tasks in parallel, one per part, but I don't really want to go through the big initial upload again and don't know if I would run into other problems ;-)
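
Something like this is what I had in mind, purely as a sketch (made-up subfolder names, each one initialized as its own repository against its own bucket or path; I haven't actually tried it):

DUPLICACY=/volume2/MyLib/duplicacy_linux
for part in Part1 Part2 Part3 Part4; do
    ( cd "/volume2/MyLib/$part" && "$DUPLICACY" backup -threads 2 ) &   # one backup per part
done
wait   # let all four backups finish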

Btw, regardless of the speed improvements (which would be nice to have, but if they're not possible I could live with that), do you see other problems with this scenario, or do you think it is okay?

--

Update: Hmm, looking at my CPU, I'm quite sure Duplicacy actually is rehashing. Maybe it can't access or store timestamps the way it expects and therefore falls back to hashing? Can I check this somehow?

Also maybe interesting in this regard: MyLib is encrypted with Synology's shared-folder encryption (but Duplicacy obviously is backing up the already unlocked folder), and it is on a btrfs volume...


gchen    Jan 9 2:13PM 2018

Can you run ./duplicacy_linux backup -threads 8 -stats and send the output to duplicacy-support@acrosync.com?


Tastal    Jan 9 2:18PM 2018

Yes, but it will probably take some hours ;-)

Stay tuned...


gchen    Jan 9 2:25PM 2018

Maybe the Synology encryption is changing the timestamps so Duplicacy thinks that a lot of files have changed.

You can check this by running the history command on a file that you know didn't change:

duplicacy history -hash relative/path/to/file


Tastal    Jan 9 2:40PM 2018

If history is supposed to show me only one version, then something is off. If it is supposed to show an entry for each snapshot/revision, then the matching dates and checksums mean everything is in order. Here is the output:

      1:         3895767 2012-07-31 20:45:27 8d7033d91e373acb30e75e040ba98999c7c4a5b2afae7741d17a5e7822359090 path/to/my/old/file.ext
      2:         3895767 2012-07-31 20:45:27 8d7033d91e373acb30e75e040ba98999c7c4a5b2afae7741d17a5e7822359090 path/to/my/old/file.ext
current:         3895767 2012-07-31 20:45:27 8d7033d91e373acb30e75e040ba98999c7c4a5b2afae7741d17a5e7822359090 path/to/my/old/file.ext

(Note that I terminated one of my follow-up backup tasks at some point, so having only two old revisions should be normal. The exact history is: init -> backup (canceled after some minutes) -> backup (complete) -> backup (canceled after some hours) -> backup (complete) -> backup (canceled to start it with -stats now).)


gchen    Jan 9 3:15PM 2018

Can you add -v to the backup command:

./duplicacy_linux -v backup -threads 8 -stats

Duplicacy will show what files it has rescanned.


Tastal    Jan 9 4:00PM 2018

I restarted the job with -v.

Maybe I'll cancel it again, as it looks like I already get some interesting output even though the run did not complete. This is the output from the canceled -stats command:

Storage set to b2://Synology-MyLib
Last backup at revision 2 found
Indexing /volume2/MyLib
Loaded 2 include/exclude pattern(s)
Use 8 uploading threads
Skipped chunk 2 size 14941011, 2.85MB/s 08:26:59 0.0%
Skipped chunk 3 size 3197952, 2.88MB/s 08:21:06 0.0%
Skipped chunk 5 size 3972847, 2.34MB/s 10:16:35 0.0%
Skipped chunk 6 size 7615173, 2.58MB/s 09:20:30 0.0%
Skipped chunk 4 size 16237654, 3.99MB/s 06:02:26 0.0%
Skipped chunk 7 size 11527711, 4.22MB/s 05:42:24 0.0%
Skipped chunk 8 size 15528183, 4.64MB/s 05:11:01 0.0%
Skipped chunk 9 size 4515122, 4.62MB/s 05:12:25 0.0%
Uploaded chunk 1 size 13121963, 4.12MB/s 05:50:38 0.0%
Skipped chunk 10 size 30382986, 5.25MB/s 04:35:02 0.1%

[...]

Skipped chunk 2604 size 2650867, 5.24MB/s 03:12:20 30.3%
Skipped chunk 2607 size 5740195, 5.23MB/s 03:12:33 30.3%
Skipped chunk 2608 size 7121953, 5.23MB/s 03:12:33 30.3%
/volume2/MyLib/duplicacy_task: line 3:  9857 Terminated              /volume2/MyLib/duplicacy_linux backup -threads 8 -stats

This is its progress after 1.5 h (though I'm not sure whether it would have completed in 5 h total this time), and there are literally no entries other than these "Skipped chunk" lines and, very rarely, an "Uploaded chunk" one.


Tastal    Jan 9 4:29PM 2018

Oh, I canceled it to get a quick impression and I think I know what's going on: now I get Packing some/of/my/files output as well. The files Duplicacy is packing are VM images, which probably make up about 50% of the data and are each very big. The packing produces mostly skipped chunks and only a few uploaded ones, probably because only small parts of each image change, but the file's size and date obviously did change. I'll wait for one complete run to verify this, but I'm pretty sure that's it. I hadn't thought about this ... :/ However, I can work around it by taking more VM snapshots and thus avoiding the 50-200 GB image files getting rescanned.

Under these circumstances the speed is probably fine. My initial impression that CPU utilization could be higher, since nothing else seems to be a bottleneck, still stands, but more speed in this case would really only be "nice to have" and certainly not a requirement (:

I'll check back if the output of a complete run brings any surprises. Thanks for this great support so far, that is really impressive!


gchen    Jan 9 9:18PM 2018

I would suggest separating the VM images from the other files and backing them up to a different bucket using this init command:

duplicacy init -c 1M -max 1M -min 1M repository_id b2://bucket

By setting the 3 chunk size parameters to the same value, this essentially disables the variable-size chunking algorithm and switches to the fixed-size chunking algorithm, which is more suitable for virtual machine images (our ESXi backup tool, Vertical Backup, does exactly this). The fixed-size chunking algorithm can be up to 100% faster since it doesn't need to calculate the rolling checksum.
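
To illustrate the idea only (this is just the concept, not how Duplicacy implements it internally): with fixed-size chunking, a file is simply cut at 1 MB boundaries and each piece is hashed, so any unchanged region always produces the same chunk hash and can be skipped. Something like this demonstrates it (placeholder image path):

# Conceptual sketch, not Duplicacy code: cut a file into fixed 1 MB pieces and
# hash each piece; identical pieces yield identical hashes and would not be re-uploaded.
mkdir -p /tmp/chunk-demo && cd /tmp/chunk-demo
split -b 1M /path/to/vm.img piece_
sha256sum piece_* | head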

However, even with the default settings, a scanning speed of 4-5 MB/s seems a bit slow to me. Can you create a small random file (say 1 GB) and run the backup repeatedly with the -hash option to see how fast it can go?

duplicacy init repository_id b2://bucket
dd if=/dev/urandom of=1G bs=1M count=1024
duplicacy backup -stats -threads 8
duplicacy backup -stats -hash
duplicacy backup -stats -hash
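
If you want an absolute number, you could wrap the -hash runs in a simple timing loop, roughly like this (just a sketch; 1024 MB divided by the elapsed seconds gives the MB/s):

# Rough timing sketch for the 1 GB test file
for i in 1 2 3; do
    start=$(date +%s)
    duplicacy backup -stats -hash
    end=$(date +%s)
    echo "run $i: $(( 1024 / (end - start) )) MB/s"
done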


Tastal    Jan 10 12:45PM 2018

  • The complete run finished without surprises; everything in the log is VM files being rescanned at the speed we saw before (as before, the CPU was the only noticeably used resource, at 25%, while the HDDs/volume showed nearly no utilization/IO at all).

  • Thanks for the suggestion! I will restructure my NAS accordingly and back up the VMs with fixed chunk sizes.

  • I created a test bucket and a local test folder on the NAS where I ran your exact commands. Unfortunately, the 2.0.10 Linux binary doesn't seem to honor the -hash option: even when I run just duplicacy backup -hash, rehashing never takes place and the backup finishes immediately, only increasing the revision number. The output of, for example, ./duplicacy_linux -v backup -hash -stats (I tried -hash as the first parameter and added -v to get more info) looks like this:

    Storage set to b2://Synology-Testing
    Enter Backblaze Account ID:xxx
    Enter Backblaze Application Key:xxx
    Downloading latest revision for snapshot repository_id
    Listing revisions for snapshot repository_id
    Last backup at revision 4 found
    Indexing /volume2/TempFiles/Synology-Testing
    Listing snapshots/
    Listing snapshots/repository_id/
    Listing chunks/
    Listing chunks/fb/
    Listing chunks/3d/
    Listing chunks/22/
    Listing chunks/05/
    Backup for /volume2/TempFiles/Synology-Testing at revision 5 completed
    Files: 3 total, 1,041M bytes; 0 new, 0 bytes
    File chunks: 223 total, 1,041M bytes; 0 new, 0 bytes, 0 bytes uploaded
    Metadata chunks: 3 total, 17K bytes; 0 new, 0 bytes, 0 bytes uploaded
    All chunks: 226 total, 1,041M bytes; 0 new, 0 bytes, 0 bytes uploaded
    Total running time: 00:00:03
    
  • Kinda off topic: I noticed a few URL request 'https://pod-xxx.backblaze.com/b2api/v1/b2_upload_file/xxx/xxx' returned status code 503 entries in the log (very few, and sometimes with 500 instead of 503). I assume I can safely ignore them, since Duplicacy presumably re-uploads the corresponding chunks, but just to be sure: am I right?


gchen    Jan 10 11:25PM 2018

It looks like there is a bug in the -hash option, but you can run touch * before the backup, which should be equivalent to -hash.
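
Note that touch * only updates the top-level entries; if the test folder had subdirectories, you would want something recursive, for example:

find . -type f -exec touch {} +   # update the mtime of every file under the repository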

And yes, the B2 upload servers return 503 or 500 errors all the time. When such an error occurs, Duplicacy switches to a different upload server and attempts to upload the same chunk again, unless the maximum number of retries has been reached.


Tastal    Jan 11 10:25AM 2018

touch * triggered rehashing, and I get a very similar speed in this case (5.45 MB/s overall across three tests).

Looking at htop (as it is more precise than Synology's resource monitor), I see one core, which is also the core duplicacy is using at that moment, sitting at 100% while the other three are nearly idle. Duplicacy switches to other cores during hashing but seems to use only one at a time. Maxing out the process priority by decreasing its niceness does not change this. And -threads, as its documentation suggests, only seems to add more upload threads; the pattern in htop stays the same. I assume this is the bottleneck, as the HDDs/IO can obviously go much faster, but I'm not sure whether multithreading the hashing process could be achieved fairly easily, and I'm also not really sure whether it would increase the speed by ~4x, as my low-level programming expertise is quite limited :p (It's an Intel Atom C2538 @ 2.4 GHz btw, so 5 MB/s for something like hashing on one core seems okay.)
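
Just as a back-of-the-envelope check that the numbers add up: at ~5.45 MB/s on a single core, rehashing my bigger images takes on the order of hours, which fits the 10-15 hour follow-up backups quite well:

# Rough estimate: hours needed to rehash N GB at 5.45 MB/s on one core
awk 'BEGIN { for (gb = 50; gb <= 200; gb += 50) printf "%3d GB -> %4.1f h\n", gb, gb*1024/5.45/3600 }'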