
Hello guys! 

The question is in the title 😄: how do you use SureBackup jobs?

I won't repeat the speech about why it's important to apply the 3-2-1-1-0 rule and why we have to perform restore tests at recurring intervals. A backup is good; a verified, error-free backup is better!

Personally, I have two use cases.

The first one: Application Group

I create a dedicated job for an application group when I need to create a sandbox, test specific roles on VMs that must be interconnected, or test a single VM for troubleshooting, for example.
In the configuration of my SureBackup job, I select only my Application Group.

The second: Linked Jobs

In this case, I want to test all the VMs in my backup jobs. A job in Application Group mode starts one VM at a time, and if a test on one VM fails, the whole job fails immediately.

With the Linked Jobs option, Veeam can start several VMs in the DataLab environment in parallel, and if a test fails on one VM, the job continues to perform checks on the remaining machines (see the sketch below).
In the configuration of my SureBackup job, I select only the Linked Jobs and tune the roles, tests, and other values if necessary.
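Not Veeam's code, of course, but here is a minimal Python sketch of the behavioral difference between the two modes; `verify()` is a hypothetical stand-in for whatever DataLab checks (boot, heartbeat, ping, test scripts) run against a VM:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def verify(vm: str) -> bool:
    """Hypothetical stand-in for a DataLab check (boot, heartbeat, ping)."""
    print(f"testing {vm}")
    return True  # pretend the check passed

def application_group_mode(vms: list[str]) -> bool:
    # Sequential and fail-fast: the first failing VM stops the whole job.
    for vm in vms:
        if not verify(vm):
            return False
    return True

def linked_jobs_mode(vms: list[str], max_parallel: int = 3) -> dict[str, bool]:
    # Parallel and continue-on-failure: every VM is tested regardless of
    # earlier results; failures are collected instead of aborting the job.
    results: dict[str, bool] = {}
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(verify, vm): vm for vm in vms}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```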
 

Do you use a mix of Application Group and Linked Jobs in the same job?

I can see a use case when you need to test roles on VMs that must have access to a domain controller; in that case, you could start your Application Group containing your DC first (a conceptual sketch follows).
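To make that mix concrete, here's a purely illustrative sketch of such a job definition; the field names are invented, not Veeam's actual configuration schema:

```python
# Illustrative only: field names are invented, not Veeam's real schema.
mixed_surebackup_job = {
    "name": "SureBackup-Nightly",
    # Application Group VMs boot first, in order, and stay running so
    # that linked-job VMs can reach them (e.g. a domain controller).
    "application_group": ["DC01"],
    # VMs from these backup jobs are then verified afterwards.
    "linked_jobs": ["Backup-App-Tier", "Backup-Web-Tier"],
    "max_parallel_vms": 3,
    "tests": ["heartbeat", "ping"],
}
```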

Waiting for your feedback :)

I tend to test it in my homelab using both of these scenarios and anything else I can think of.  It is such a great tool for testing backups, etc.


Linked Jobs here. Our production backup jobs are verified on an alternating weekly schedule over the month, according to the backup & recovery plan we provide (see the sketch after the list).

Month A

Week 1 jobs A, B

Week 2 jobs C

Week 3 jobs D,E

Week 4 jobs…

then start over
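A minimal sketch of that rotation, assuming a four-week cycle and placeholder job names:

```python
from datetime import date

# Hypothetical four-week rotation of which backup jobs get verified.
ROTATION = {1: ["A", "B"], 2: ["C"], 3: ["D", "E"], 4: ["F"]}

def jobs_to_verify(today: date) -> list[str]:
    week_of_month = (today.day - 1) // 7 + 1
    # A fifth week wraps back to week 1 so the cycle repeats cleanly.
    return ROTATION[(week_of_month - 1) % 4 + 1]

print(jobs_to_verify(date(2023, 6, 20)))  # week 3 -> ['D', 'E']
```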


I also see both use cases in customer environments. And a third one: simply testing each VM with a heartbeat and/or ping test (sketch below). But that's more for the auditors than for helping the people in charge sleep better.
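For that third case, the check itself is trivial; a standalone Python ping sweep like this (example addresses, and not what SureBackup runs internally) captures the idea:

```python
import platform
import subprocess

def ping(host: str, timeout_s: int = 5) -> bool:
    """One ICMP echo; the count flag differs between Windows and Unix."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(
            ["ping", count_flag, "1", host],
            capture_output=True, timeout=timeout_s,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

for vm_ip in ["192.0.2.10", "192.0.2.11"]:  # example DataLab addresses
    print(vm_ip, "OK" if ping(vm_ip) else "FAILED")
```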


Normally I go with application groups and either test specific applications or only the critical systems. Linked Jobs are often a bit too much, in my opinion.


As a consultant, there's always a mixed appetite for testing: everyone wants it, but not everyone is willing to put in the effort to resolve the problems it uncovers before they're real, in-production problems.

 

I actually push for daily testing, during the day while no (or few) backup jobs are running. SureBackup can be quite IO intensive, and it has no controls for balancing the number of VMs under test against the number of backup tasks running, which would be beneficial for managing available IO.

 

Depending on job structure, I'll tend to have just a DC in the application group, or, if the testing calls for it, a DC plus the appropriate SQL Server VM in the application group, and then just link the jobs.

 

This way you can confidently answer the question “is my data recoverable?” every time.

 

Finally, on a few occasions I've had customers with the budget and appetite for VDRO, and I got them set up with DataLabs to take the testing one step further, with the appropriate teams performing their own testing on the platform too.


I have seen a few things in my years here at Veeam for SureBackup jobs and using the DataLab. Here are some of the highlights:

  • Testing regular patches/updates by hand for fiddlesome applications
  • Testing operating system modernization/migration. Sounds crazy, but when WS2008R2 went EOL, a lot of people used the DataLab to mount the WS2012R2 ISO - and upgraded just fine. You can test it in the DataLab.
  • Testing scripts/automation for things that are hard to do in production.
  • Testing security configurations and scans.
  • A training tool for new administrators of applications. Fiddle here, not there LOL.
  • Business analytics/modeling in an application at a point in time.
  • Cyber range. Try to hack it in there.


Hello @MicoolPaul, in Linked Job mode you can define the number of VMs you want to start simultaneously, which can limit the performance needed for the DataLab test.
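Conceptually, that cap works like a fixed pool of boot slots; a rough Python illustration (not Veeam internals):

```python
import threading
import time

MAX_SIMULTANEOUS_VMS = 2  # the value you would set in the job
boot_slots = threading.BoundedSemaphore(MAX_SIMULTANEOUS_VMS)

def boot_and_test(vm: str) -> None:
    with boot_slots:   # blocks while the DataLab is at capacity
        print(f"booting {vm}")
        time.sleep(1)  # stand-in for boot + heartbeat/ping tests
        print(f"done    {vm}")

threads = [threading.Thread(target=boot_and_test, args=(vm,))
           for vm in ["web01", "app01", "sql01", "dc01"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```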

 



This is a really great list of examples for using SureBackup. Some even I never thought of. 😋



@Rick Vanover thanks for sharing. Indeed, I have a customer who creates DataLabs for application testing before releasing applications to production; with the IP redirect (static IP mapping), the test teams are unaware that they are working in a sandbox.



Hi @Stabz, you're absolutely right that you can, but what SureBackup doesn't do is pay any attention to other IO tasks underway, such as backup copy jobs (BCJs). Take the scenario below:

 

  • Have a backup job that runs nightly
  • Have an immediate-mode backup copy job
  • Have a SureBackup job that executes after the backup job completes

In the above scenario, the backup job finishes and I want to test it immediately; however, I get a worse experience because the immediate BCJ is generating IO on the repository, compared with waiting until the time the BCJ would normally have completed and running the test then. I've seen low-end repositories choke hard just getting the VM to boot because of this, which then causes the VM's services to fail to start within an appropriate amount of time; in the worst case, the VM had a pending Windows Update reboot, which made the boot time even worse on top of that.

 

Veeam has no scheduling option that says “run once the BCJ has finished”, because an immediate-mode BCJ is technically never finished in v11, so I can't get SureBackup to run directly after the BCJ.
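One workaround is to schedule SureBackup externally: poll for an idle window between the BCJ's sync intervals and only then kick the job off. A rough sketch; `bcj_is_idle` and `start_surebackup` are hypothetical stubs for whatever PowerShell or REST calls your environment exposes:

```python
import time

def bcj_is_idle() -> bool:
    """Hypothetical: query the backup copy job's session state."""
    raise NotImplementedError

def start_surebackup(job_name: str) -> None:
    """Hypothetical: trigger the SureBackup job."""
    raise NotImplementedError

def run_when_quiet(job_name: str, poll_s: int = 300,
                   max_wait_s: int = 6 * 3600) -> None:
    waited = 0
    # An immediate-mode BCJ never reaches a terminal "finished" state,
    # so wait for an idle window between sync intervals instead.
    while not bcj_is_idle():
        if waited >= max_wait_s:
            raise TimeoutError("no idle window found for SureBackup")
        time.sleep(poll_s)
        waited += poll_s
    start_surebackup(job_name)
```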

 

Not really a problem with flash storage, of course, but on RAID5/6 with 7.2k disks it can become a performance problem quite quickly, due to the difference between read and write IO costs on the array.




Hello @MicoolPaul, yes, you are right on this point.
This is why I don't recommend using SureBackup on poorly performing storage, except during a window with no other activity and with a limited selection of VMs.


A job priority setting would be nice, so Veeam wouldn't execute a backup copy job and a SureBackup job at the same time.

 

