I recently created a new backup copy job that pulls from five source backup jobs. When the backup copy job runs, it appears to process all of these sources simultaneously rather than sequentially. Because the target repository is on spinning disk, I am concerned about what this means for performance and unnecessary disk fragmentation. Is there a way to change this behavior and have the job copy the data from each backup job one at a time?

You can limit the target repository's concurrent tasks to accomplish this. However, the repository being on spinning disk doesn't mean it will have any problem with multiple tasks running at the same time.


If the repository server is a separate machine, then depending on its resources (CPU/RAM), limiting the number of tasks is the way to go, as Tommy mentioned. The spinning disks won't be the limiting factor so much as the server's resources and task slots.
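
If you want a quick sanity check of what task limit the server can support, the commonly cited sizing guidance is roughly one CPU core and 4 GB of RAM per repository task slot; treat that ratio as an assumption and confirm it against the best-practice guide for your version. A minimal sketch in Python:

```python
# Rough repository task-slot estimate from the server's resources.
# Assumes the commonly cited guideline of ~1 core and ~4 GB RAM per
# repository task slot; confirm against current Veeam sizing guidance.

def suggested_task_slots(cores: int, ram_gb: int,
                         reserve_cores: int = 2, reserve_ram_gb: int = 8) -> int:
    """Return a conservative concurrent-task limit for a repository server."""
    usable_cores = max(cores - reserve_cores, 1)   # leave headroom for OS / other roles
    usable_ram_gb = max(ram_gb - reserve_ram_gb, 4)
    by_cpu = usable_cores          # ~1 core per task slot
    by_ram = usable_ram_gb // 4    # ~4 GB RAM per task slot
    return max(min(by_cpu, by_ram), 1)

# Example: a repository server with 8 cores and 32 GB RAM -> 6 task slots
print(suggested_task_slots(cores=8, ram_gb=32))
```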


@Tommy O'Shea and @Chris.Childerhose  Unfortunately, limiting the concurrent tasks on the repository doesn't seem to stop the backup copy job from processing and writing all the data to the repository in parallel. It seems the setting only limits the number of jobs running, not the behavior within the job itself.

My main concern for performance is that the job is writing to multiple files simultaneously, which would create more disk fragmentation than writing out one file at a time. Is there something I'm missing about why this is no longer the performance concern it used to be?
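
One way to take the guesswork out of the fragmentation question is to measure it: write a few restore points with the task limit high, then low, and compare the extent counts of the resulting backup files. A minimal sketch, assuming a Linux-based repository where the filefrag utility is available, with a placeholder path standing in for your backup copy job's folder (a Windows/ReFS repository would need a different tool):

```python
# Compare fragmentation (extent counts) of backup files in a repository folder.
# Assumes a Linux repository where the `filefrag` utility is installed;
# BACKUP_DIR is a placeholder for the backup copy job's target folder.
import re
import subprocess
from pathlib import Path

BACKUP_DIR = Path("/mnt/repo01/CopyJob01")   # placeholder path

def extent_count(path: Path) -> int:
    """Return the number of extents filefrag reports for a file."""
    out = subprocess.run(["filefrag", str(path)],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r":\s*(\d+) extents? found", out)
    return int(match.group(1)) if match else -1

# .vbk = full backups, .vib = incrementals
for f in sorted(BACKUP_DIR.glob("*.vbk")) + sorted(BACKUP_DIR.glob("*.vib")):
    print(f"{f.name}: {extent_count(f)} extents")
```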


I will agree with Tommy and Chris. CPU and memory are important when calculating task slots.

It's also good to benchmark the disk to see how many concurrent writes it can handle without a dramatic drop in performance. If you have several disks in a RAID array, you can run a high number of concurrent tasks without performance degradation.
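
To put numbers on that, you can compare one writer against several concurrent writers on the repository volume and see how much aggregate throughput actually changes. A minimal sketch, with a placeholder path on the repository disk; the file size here is small and should be scaled well past any controller/RAID cache for meaningful results:

```python
# Rough sequential-write benchmark: 1 writer vs. several concurrent writers.
# TARGET_DIR is a placeholder for a folder on the repository volume; raise
# FILE_MB well beyond any RAID/controller cache for realistic numbers.
import os
import time
from concurrent.futures import ThreadPoolExecutor

TARGET_DIR = "/mnt/repo01/benchtmp"   # placeholder path on the repository disk
FILE_MB = 256                         # size of each test file in MiB
CHUNK = b"\0" * (1024 * 1024)         # 1 MiB write blocks

def write_one(idx: int) -> None:
    """Write one test file sequentially, then remove it."""
    path = os.path.join(TARGET_DIR, f"bench_{idx}.tmp")
    with open(path, "wb", buffering=0) as fh:
        for _ in range(FILE_MB):
            fh.write(CHUNK)
        os.fsync(fh.fileno())
    os.remove(path)

def aggregate_mb_per_s(writers: int) -> float:
    """Run `writers` parallel sequential writers and return aggregate MB/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=writers) as pool:
        list(pool.map(write_one, range(writers)))
    return writers * FILE_MB / (time.perf_counter() - start)

os.makedirs(TARGET_DIR, exist_ok=True)
for n in (1, 5):   # 5 matches the number of source jobs in the question
    print(f"{n} concurrent writer(s): {aggregate_mb_per_s(n):.0f} MB/s aggregate")
```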


This type of thing isn't really a concern any more in newer versions of Veeam. Another thing to check is whether you are using the per-machine backup chain format, as it creates a separate chain for each VM in the backup job - see Backup Chain Formats in the User Guide for VMware vSphere.

