
Hi all, 

Running Veeam Backup & Replication v11 on an Enterprise license. We have been backing up to HPE StoreOnce repositories for nearly a year, but due to operational changes (more VMs, plus additional encryption and retention requirements) we are filling them up faster than anticipated (less dedupe).

We would like to move our older monthly/yearly backups to Azure storage for long-term retention, in order to free up space on the StoreOnce devices for our nightly/weekly backups and restores.

Initial research suggests that creating a scale-out repository, adding the existing StoreOnce repository as the performance tier extent and a new Azure object storage container as the capacity tier extent, is the way to go. The documentation confirms this will re-point all existing jobs to use the new scale-out repository when they next run, and anything older than X days (whatever we set in the capacity tier options) will be automatically offloaded to the capacity tier.
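The move policy described above can be sketched as a simple predicate: a restore point is offloaded once its backup chain is sealed (inactive) and it has aged past the operational restore window. This is an illustrative sketch of the documented behaviour, not Veeam's actual code; the function name and parameters are hypothetical.

```python
from datetime import datetime, timedelta

def eligible_for_offload(point_created: datetime, chain_sealed: bool,
                         window_days: int, now: datetime) -> bool:
    """Capacity-tier move policy, roughly: only restore points belonging to
    inactive (sealed) chains that are older than the operational restore
    window are moved from the performance tier to the capacity tier."""
    return chain_sealed and (now - point_created) > timedelta(days=window_days)

now = datetime(2023, 6, 1)
print(eligible_for_offload(datetime(2023, 1, 1), True, 30, now))    # old point, sealed chain
print(eligible_for_offload(datetime(2023, 5, 25), True, 30, now))   # still inside the window
print(eligible_for_offload(datetime(2023, 1, 1), False, 30, now))   # chain still active
```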

 

The questions I have are:

  1. Is this the most practical method to free up space on our existing repositories and move our older monthly backups to the cloud for retention? Is there another way we should be considering?
     
  2. At what point does the copy/offload task run during this process? Is it at the point of creating the new scale-out repository if we specify capacity tier at the time of creation?
     
  3. How resilient is the offload/copy process? We have probably 15-20 TB to offload over a slow link (100 Mbps), so it will take many weeks. It is a dedicated circuit used only for this purpose, but we can't guarantee that it won't be interrupted at some point during the process. 
     
  4. How careful should we be about running these offload/copy tasks while our normal production backups are running? We can set the copy window options to disallow copying during our backup windows - is this required/recommended?
     
  5. How long will we have to wait before we start to reclaim space? Will Veeam mark a backup file for deletion as soon as it has copied it to the capacity tier, or will we need to wait until the entire initial offload has completed?
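For question 3, the rough arithmetic on the 100 Mbps link is worth doing up front. A quick sketch, assuming an 80% effective link efficiency (both that figure and the function are illustrative, not measured):

```python
# Rough transfer-time estimate for the initial offload over a dedicated link.
def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed to push `data_tb` terabytes over a `link_mbps` link,
    derated by an assumed protocol/overhead efficiency factor."""
    data_bits = data_tb * 1e12 * 8                 # TB -> bits (decimal units)
    effective_bps = link_mbps * 1e6 * efficiency   # usable bits per second
    return data_bits / effective_bps / 86400       # seconds -> days

for tb in (15, 20):
    print(f"{tb} TB over 100 Mbps: ~{transfer_days(tb, 100):.0f} days")
# roughly two and a half to three and a half weeks of continuous transfer
```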

 

Any input greatly appreciated! 

Thanks, 

 

To start off, I would suggest upgrading to v12, as you can then back up direct to object storage instead of using SOBR offloading. That way you can control the job and when/how long it runs. The offloading will work, but as you said, the time it takes will be long. The same goes for a backup to object, but you can control it better with scheduling. Everything you noted will work, and if the goal is to free up space then you will need to use offloading, since direct to object does not free up space without changing retention settings.


One thing to consider is the dedupe/compression ratio. You could potentially move a large amount of data and still not free up much space. For example, I deleted a 19 TB backup chain on a StoreOnce, and once the housekeeping had run on the StoreOnce it only increased free space by 3 TB.
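The effect described here can be modelled crudely: deleting a chain from a deduplicating store only returns the blocks that no surviving backup still references. A hypothetical sketch, where the ~84% shared-block figure is simply back-calculated from the 19 TB / 3 TB example above:

```python
def space_reclaimed_tb(logical_deleted_tb: float, shared_fraction: float) -> float:
    """Physical space freed when a chain is deleted from a dedupe store:
    only blocks unique to the deleted chain actually come back."""
    return logical_deleted_tb * (1 - shared_fraction)

# 19 TB deleted but only ~3 TB freed implies ~84% of its blocks were
# shared with other chains still on the appliance:
print(f"{space_reclaimed_tb(19, 0.84):.1f} TB reclaimed")
```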


Hi Chris, thanks for your reply. 

So if I were to upgrade to v12, would I be able to create a copy job on a per-VM or per-backup-job basis and, once the copy had completed, change the retention settings on the original job? Veeam would then remove the older files at some point after the change?

Whether per VM or per job, it looks like I would need to copy every available restore point at once. There isn't a way for me to, for example, create a copy job to copy January's month-end backup on its own, and then February's later on. Is this correct?

Many thanks, 



Yes, that is correct: you create a job based on a VM or on another job, then once it's copied off, change the retention for the original job. A backup copy job would be ideal in this scenario.

Unfortunately, there is no way to specify months or specific periods to copy off - it is all backup points.



I’ve been here too.  Shrinking retention in my jobs to free up space… I got rid of weeks of backups and ended up saving next to nothing. 



Hi Chris, 

Having upgraded to V12 on one of our Veeam Backup servers for testing, I only seem to be able to do the following:

1 - Backup Copy Job (image-level backup): I can select from jobs or from repositories, but even if I use exclusions to limit this to one VM in the job, it still looks like it will copy ALL the restore points it has for that VM. 

2 - Backup Copy Job (storage copy): one whole repository to another, seemingly. 

3 - Copy Job (file or VM): VM looks to select a live VM from vCenter, and file doesn't show the StoreOnce repository as a source option. 

I have my Azure object storage repository set up and available in VBR V12 but I can’t see a way to copy single restore points or individual VBK files to this Azure repository. 

Am I missing something?



You are not missing anything: Veeam is going to copy all the required files that make up the backup chain, not just single files. Copying individual files can be done with a File Copy job, but you would need every file that makes up the chain, which is why Veeam copies them all.



I’ve been here too.  Shrinking retention in my jobs to free up space… I got rid of weeks of backups and ended up saving next to nothing. 

What did you end up doing in the end?

If I can find a way to actually copy the VBK files off, I am prepared to trim down to keeping just 2 or 3 months of retention on the StoreOnce... which, when it was 3 months into production from new, was only about 16 TB. 

This is starting to bring back memories of the old “thin provisioned” storage volumes on a Compellent SAN we had way back. Deleting files (VMDKs mostly) made almost no difference, but if we created a new Compellent volume, copied what we wanted to keep to that, and then deleted the old volume, all the space was reclaimed! 



Yup. A time-consuming way to reclaim space on the SAN. Another reason why I always stand up new servers on new volumes rather than doing in-place upgrades. 

 

Every so often I’ll create new volumes and datastores and migrate, and it’s amazing how much space you can reclaim. These days I use thin on the SAN and thick in VMware: datastores never “fill up”, I still save the space, and I can over-provision in one spot without having to monitor both.

 

The solution is to buy more storage and size your environment properly. 

 

How much data are you backing up, and what are your retention policies? Find that out, calculate the total, and add an appropriate margin for growth. There are calculators online to find your total GFS size too. I do this and THEN I’ll choose whether I want to dedupe/compress/etc. The space savings are a bonus. I never bank on them where I work, as it’s mostly video and images, where I get awful compression - nearly 1:1 in my environment, which is extremely large.
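The sizing approach described above (total retained data plus growth headroom, with data reduction treated as a bonus) can be put into a back-of-the-envelope formula. All the inputs in this sketch are illustrative assumptions, not figures from the thread:

```python
def repo_size_tb(source_tb: float, full_copies: int, daily_change_pct: float,
                 incr_days: int, growth_pct: float, reduction_ratio: float = 1.0) -> float:
    """Rough repository sizing: retained full backups (including GFS copies)
    plus incrementals, with growth headroom, divided by the expected data
    reduction ratio (use 1.0 when, as with video/images, dedupe barely helps)."""
    fulls = source_tb * full_copies
    incrementals = source_tb * (daily_change_pct / 100) * incr_days
    raw = (fulls + incrementals) * (1 + growth_pct / 100)
    return raw / reduction_ratio

# e.g. 10 TB of source data, 12 monthly fulls, 5%/day change across
# 30 daily increments, and 20% growth headroom:
print(f"{repo_size_tb(10, 12, 5, 30, 20):.0f} TB")
```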

 

 


We generally just add a shelf to the StoreOnce, within reason. We have a 5250 in each of our DCs for IaaS backups, and if we go past, say, 3 or 4 shelves we will look to add an additional controller as well, so we scale compute along with the storage.


Just adding a shelf will see us in the same position (well, worse actually, because there will be even more data stuck on the StoreOnce) in about a year or so. 

I suppose my main issue here is the way we’ve set up the jobs: too many VMs in each, and too many restore points, making them too big to move in one go (on a per-job basis) to a cloud repository over a slow link. We should have got the cloud repositories from the start; then we wouldn’t be faced with huge upload times before freeing space!
 

I suppose what I’ll have to do is create a new repository using DAS or NFS, something on the LAN, then do a copy job to there. Once the data is there, I can change the retention on the StoreOnce repository, hopefully reclaim some of the space, and then probably split the jobs out into smaller ones with new scale-out capacity tier repositories, or newly scheduled copy jobs to Azure. In the meantime I can upload from the DAS/NFS repository to Azure using a different Veeam server… or copy it to an Azure Data Box and ingest it that way. 

Live and learn I suppose. 

One last question for anyone who might know: when selecting “Delete from disk” on a backup job or VM backup chain, how long would we be looking at for any space to be reclaimed on a StoreOnce repository? Once housekeeping tasks have run on the StoreOnce, or are there further Veeam tasks that need to take place as well?

Thanks all for the input so far!
 


Wonderful conversation; I went through it and found it very helpful.

