Initial backup error - 500 Internal Server Error

Jocelyn     Aug 13 8:55AM 2016 CLI

The initial backup is unable to complete.

Here is a snippet of the output, with verbose and debug logging enabled:

2016-08-13 09:40:02.836 DEBUG CHUNK_EXIST Skipped chunk adb30f485e6072fffeae32d4cecc5a7008cefcf0673642516f81cd3c45a0a4ed in the storage
2016-08-13 09:40:02.836 INFO UPLOAD_PROGRESS Skipped chunk 9351 size 63137, 194KB/s 16:20:18 6.0%
2016-08-13 09:40:03.323 DEBUG CHUNK_DUPLICATE Chunk 5d85c6e26bdd1cb3070a8cba8c4bf672769d5198e9ad5481376db012ab752d6d already exists
2016-08-13 09:40:03.323 DEBUG CHUNK_EXIST Skipped chunk 5d85c6e26bdd1cb3070a8cba8c4bf672769d5198e9ad5481376db012ab752d6d in the storage
2016-08-13 09:40:03.323 INFO UPLOAD_PROGRESS Skipped chunk 9352 size 137150, 194KB/s 16:20:22 6.0%
2016-08-13 09:40:03.951 DEBUG CHUNK_DUPLICATE Chunk ffee93fbfed12b244c2b7daaa45da0b2c45d5e12917fe76ca719e0dfea833139 already exists
2016-08-13 09:40:03.951 DEBUG CHUNK_EXIST Skipped chunk ffee93fbfed12b244c2b7daaa45da0b2c45d5e12917fe76ca719e0dfea833139 in the storage
2016-08-13 09:40:03.951 INFO UPLOAD_PROGRESS Skipped chunk 9353 size 28313, 194KB/s 16:20:20 6.0%
2016-08-13 09:40:09.293 ERROR UPLOAD_CHUNK Failed to find the path for the chunk c929d28983e4d11dde1116de9d3eb1167af961bd6db25252a9c2ab5c178dbb82: 500 Internal Server Error {2 UPLOAD_CHUNK Failed to find the path for the chunk c929d28983e4d11dde1116de9d3eb1167af961bd6db25252a9c2ab5c178dbb82: 500 Internal Server Error}


gchen    Aug 13 12:22PM 2016

which storage are you using?


Jocelyn    Aug 13 12:42PM 2016

Amazon S3


gchen    Aug 13 1:37PM 2016

I'm not sure of the exact cause -- it could have been a transient network error, or the retry count (currently 5) may not be enough. However, I do notice that your chunk sizes are small. Using a bigger average chunk size (for example, the default 4M) will definitely help and should improve the upload speed significantly.
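Since the chunk size is fixed when the storage is initialized, switching means re-creating the storage. Roughly like this -- treat it as a sketch, check duplicacy init -h for the exact options, and substitute your own snapshot id and storage url:

    # re-create the storage with the default 4M average chunk size
    duplicacy init -c 4M <snapshot id> <storage url>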


Jocelyn    Aug 13 6:16PM 2016

I have restarted the backup multiple times and the same issue occurs each time.

I'm backing up a mail directory for multiple users: around 40K files with an average size of 175KB. In this scenario, would a chunk size of 4MB be a good choice?


gchen    Aug 13 7:08PM 2016

Duplicacy actually combines files before splitting them into chunks, so it doesn't matter how small the individual files are. A larger chunk size reduces the API request rate, which is why I think it would help.
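To put rough numbers on it -- these are back-of-envelope figures based on your 40K files at ~175KB each, plus a guessed current average chunk size of 128KB judging from the sizes in your log:

package main

import "fmt"

func main() {
	// Back-of-envelope estimate only; file count and average file size come
	// from the description above, and the 128KB average chunk size is a guess
	// based on the chunk sizes visible in the log.
	const (
		fileCount    = 40000
		avgFileSize  = 175 * 1024      // ~175KB per mail file
		smallChunk   = 128 * 1024      // assumed current average chunk size
		defaultChunk = 4 * 1024 * 1024 // default 4M average chunk size
	)
	total := int64(fileCount) * avgFileSize
	fmt.Printf("total data: ~%.1f GB\n", float64(total)/1e9)
	fmt.Printf("~%d chunk uploads at 128KB chunks\n", total/smallChunk)
	fmt.Printf("~%d chunk uploads at 4MB chunks\n", total/defaultChunk)
}

Fewer requests means fewer chances for S3 to return a transient error and less per-request overhead, which is where the speedup comes from.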


Jocelyn    Aug 15 5:09AM 2016

Thanks for the info about chunk size.

So I tried again after re-initializing the storage with the default chunk size. Although the backup went further, it is still failing:

2016-08-15 06:00:00.444 INFO UPLOAD_PROGRESS Uploaded chunk 1511 size 2933937, 6.44MB/s 00:11:56 61.0%
2016-08-15 06:00:00.459 INFO PACK_END Packed var/vmail/.Sent/cur/1471033506.M213103P11619.mail-srv,S=500826,W=507363:2,S (500826)
2016-08-15 06:00:00.460 TRACE PACK_START Packing var/vmail/.Sent/cur/1471033509.M485495P11619.mail-srv,S=3023058,W=3062341:2,S
2016-08-15 06:00:07.594 ERROR UPLOAD_CHUNK Failed to upload the chunk d1e8c6b4aa9f16f6173413a0e079697f5ae9ba5bc2b48c3ed7f032b980ba25e1: We encountered an internal error. Please try again. {2 UPLOAD_CHUNK Failed to upload the chunk d1e8c6b4aa9f16f6173413a0e079697f5ae9ba5bc2b48c3ed7f032b980ba25e1: We encountered an internal error. Please try again.}


Jocelyn    Aug 15 3:03PM 2016

After a few tries, the backup finally completed...


gchen    Aug 16 5:47PM 2016

I uploaded a new version with an improved retry mechanism for S3.
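The general idea is to treat S3's "We encountered an internal error. Please try again." responses as transient and retry them with an increasing delay, rather than giving up after a handful of immediate attempts. The sketch below only illustrates that pattern and is not the actual Duplicacy code; uploadChunk, the error, and the numbers are placeholders:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errTransient = errors.New("500 Internal Server Error")

// uploadChunk is a stand-in for the real S3 upload call; here it simply
// fails intermittently to simulate a transient server-side error.
func uploadChunk(data []byte) error {
	if rand.Intn(3) == 0 {
		return errTransient
	}
	return nil
}

// uploadWithRetry retries transient failures with exponential backoff.
func uploadWithRetry(data []byte, maxRetries int) error {
	delay := time.Second
	for attempt := 0; ; attempt++ {
		err := uploadChunk(data)
		if err == nil {
			return nil
		}
		if attempt >= maxRetries {
			return fmt.Errorf("giving up after %d attempts: %w", attempt+1, err)
		}
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt+1, err, delay)
		time.Sleep(delay)
		delay *= 2 // back off: 1s, 2s, 4s, ...
	}
}

func main() {
	if err := uploadWithRetry([]byte("chunk data"), 8); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("chunk uploaded")
	}
}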


Jocelyn    Aug 22 7:47PM 2016

Thank you, I will try it.