If I’m reading the docs correctly, object storage repositories (direct or as a SOBR capacity tier) use one Task per backup chain.
These backups create many files. It looks like I can limit the maximum concurrent Tasks for an S3-compatible backup repository, which essentially limits the number of concurrent backups.

I am, however, interested in increasing the speed of one very large backup. I see that many small objects are created. At least in the SOBR case, I assume the data has already been deduped and compressed, and that some sort of Task parallelism happens during offloading. Is this correct, and is there any way to increase the parallelism or the number of Tasks used during SOBR offload and/or direct backup?

Offload task concurrency depends on each extent and the number of tasks it can handle, together with the task limit set on the object storage repository. If the extents can supply more tasks than the repository allows, the repository's limit caps the total and no more than that will be sent. Hope this makes sense.
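To illustrate the idea above, here is a minimal sketch of how the effective concurrency would be capped. This is a simplified model, not Veeam's actual internals; the function name and the example limits are hypothetical.

```python
# Hypothetical model: offload concurrency is bounded by both the extents
# (each supplies up to its own task limit) and the object repository
# (which will not accept more than its configured task limit).

def effective_offload_tasks(extent_task_limits, repo_task_limit):
    """Return the number of concurrent offload tasks under this model."""
    supplied = sum(extent_task_limits)   # tasks the SOBR extents can run in total
    return min(supplied, repo_task_limit)  # repository's limit caps the total

# Example: two extents allowing 4 tasks each, repository limited to 6 tasks.
print(effective_offload_tasks([4, 4], 6))  # repository cap applies -> 6

# Example: extents only supply 4 tasks, repository would allow 6.
print(effective_offload_tasks([2, 2], 6))  # extent limits apply -> 4
```

So raising only one of the two limits may not help; under this model, whichever side is lower is the one that caps offload parallelism.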
