
I created a SOBR in order to tier off data to AWS.  That process works perfectly.

My capacity tier setting is set up for Copy and Move. The Move is set up for 14 days (operational restore window).

My backup job has a retention of 30 days, with the Backup Repo being the SOBR.

And I am doing Forward Incremental with weekly Active Full on Sunday

 

My issue is that in the Performance Tier I only have 1 backup chain (currently there are 3 restore points, Sun-Tues), not 2 as I would have anticipated with my Move policy. My Object Storage is showing the correct number of restore points/backup chains based upon the 30-day retention I have set up on my job.

What do you think I am doing wrong with this configuration? I am looking to have 1 complete backup chain plus the current week, so 7 + 7 restore points in my Performance Tier.

What am I doing wrong that I am not getting my 14 days on the Performance Tier?

 

Thank you everyone for your support

 

In order to properly tier off backup chains, you need to ensure the job targeted at the SOBR is set up correctly. What are the job settings? Is it Forever Forward Incremental? Are Synthetic Fulls or Active Fulls enabled? Is it Reverse Incremental?

That plays a part in the offload process.  See here - Moving Backups to Capacity Tier - User Guide for VMware vSphere (veeam.com)


Also see here - Backup Chain Detection - User Guide for VMware vSphere (veeam.com)


Is it Forever Forward, are Synthetic Fulls or Active Fulls enabled?

@Chris.Childerhose

He told us about it:

And I am doing Forward Incremental with weekly Active Full on Sunday

 

@Dale Johnson

Everything looks OK to me, the way you have set it up.

I cannot think of anything wrong. A chain should be removed from the Performance Tier only after 14 days, not immediately after it's sealed. Is it possible that an admin has done a manual move?

 

It's difficult to analyze without seeing the actual environment. Have you tried opening a support case with Veeam? If you are able to implement SOBRs, you have a license and should have a maintenance contract.

 

UPDATE:
Have you configured the override policy? It's in the Capacity Tier step of the SOBR. If it's activated, sealed chains will be moved earlier when there is no space left on the Performance Tier.

 

"To override behavior of moving old backups, click Override, select the Move oldest backup files sooner if scale-out backup repository is reaching capacity check box and define a threshold in percent to force data transfer if a scale-out backup repository has reached the specified threshold."

 


My bad on that - selective reading, I guess. 😂 Thanks, Detective. 🤣

Yeah everything should move as you have noted but without more details it is hard to figure out.


I have the Override set up at 10%. I have almost 50% capacity left, so I would think that should not cause this.

I am the only Admin that actively manages the environment.

@Chris.Childerhose, I am not sure what other information you would like to see.

As indicated, the Backup Job settings are:

  • 30 Days Retention
  • Advanced → Forward Incremental (no Synthetic Fulls); weekly Active Fulls on Sunday
  • There are no maintenance tasks set up under Advanced

SOBR Settings

  • Performance Tier - 1 extent
  • Placement Policy - Data Locality
  • Capacity Tier - Copy and Move selected; move backups older than 14 days (operational restore window). The way I understand this, I will have 14 days' worth of backups in my Performance Tier?
  • Archive Tier - not used.

As I am doing weekly Active Full backups, each new full seals off the previous week's chain. Does that make the chain inactive and thus slate it for purge from the Performance Tier? I would assume that my operational restore window of 14 days is then what ensures I have 2 weeks of backups on-prem.
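As I understand the docs, the Move decision combines both conditions described above: a chain must be sealed (a newer active full exists) and its restore points must fall outside the operational restore window. A minimal sketch of that logic, with illustrative names and dates (this is not Veeam's actual code or API):

```python
from datetime import date, timedelta

def chains_to_move(chains, today, op_window_days=14):
    """Return sealed (inactive) chains whose newest restore point is
    older than the operational restore window; only these are offload
    candidates under the Move policy. The active chain is never moved."""
    cutoff = today - timedelta(days=op_window_days)
    return [c for c in chains
            if c["sealed"] and max(c["restore_points"]) < cutoff]

# Weekly active fulls on Sunday, checked on a Tuesday:
today = date(2023, 3, 14)
chains = [
    # chain from over two weeks back: sealed, newest point Feb 18
    {"sealed": True,  "restore_points": [date(2023, 2, 12) + timedelta(days=d) for d in range(7)]},
    # last week's chain: sealed, but newest point (Mar 11) is inside the window
    {"sealed": True,  "restore_points": [date(2023, 3, 5) + timedelta(days=d) for d in range(7)]},
    # current week's chain: still active, never a move candidate
    {"sealed": False, "restore_points": [date(2023, 3, 12) + timedelta(days=d) for d in range(3)]},
]
print(len(chains_to_move(chains, today)))  # 1 -- only the oldest sealed chain moves
```

Under this reading, with a 14-day window you keep roughly the last sealed chain plus the current week on the Performance Tier, which matches the 7 + 7 expectation.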

 


@Dale Johnson

Are you sure you haven't hit the override policy? Right now you have 50% free space.

With another 2 active fulls, the space could have filled up until you reached 90% storage usage.

With 14 days of backups on the Performance Tier, you should calculate storage for 2-3 active fulls. The older backup chains can only be moved once a new active full has been created; before that, one of the backup chains is not sealed. So at least once a week you have 2-3 active fulls on disk. Active fulls cannot use FastClone storage savings; they need the entire space.
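A rough back-of-the-envelope version of that sizing (the numbers here are purely illustrative, not from this thread):

```python
# Worst-case Performance Tier footprint with weekly active fulls and a
# 14-day move window: briefly 2 sealed chains plus the active chain.
full_gb = 500        # size of one active full backup (assumed)
inc_gb = 50          # average incremental size (assumed)
incs_per_week = 6    # daily incrementals between Sunday fulls

chain_gb = full_gb + incs_per_week * inc_gb   # one complete weekly chain
peak_gb = 3 * chain_gb                        # 2 sealed chains + the active one
print(chain_gb, peak_gb)  # 800 2400
```

With full-sized active fulls (no FastClone savings), the tier should be sized for that transient peak, not just for one chain.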
 

How big is an active full and how big is your performance tier?


I am quite certain that we are nowhere near the capacity of the Performance Tier. The performance tier is 18.5 TB in size, and right now 8.3 TB is available.

Currently I am backing up 3 servers (2 DCs and an Exchange server with no mail stores that is used for O365 management). The current usage in the extent is 81 GB (1 full and 2 incrementals for each server).

I have a lot of space on that drive, I am nowhere near capacity on it. 

My AWS load is 360 GB, and that is 5 fulls and 4 rounds of incrementals.

Actually, now that I physically look at the folder on the drive, I do see some interesting stuff.  Perhaps I will open a support ticket.


@Dale Johnson 

Thanks for confirming the space.

I think a support case is the best way forward.

The debug logs can be analyzed by Veeam support, and they will tell you what is wrong.

It would be helpful for others if you could give us an update with a solution after your support case :)


Just an update to this case.  I opened a support case with Veeam support.  I worked through all the settings with the support engineer and we solved the issue.

The issue was in fact the Override policy on the Capacity Tier. The wording of "Offload until used space is below" confused me. I had it set at 10%, thinking that if I only have 10% left, then offload; but it works the other way around. I changed it to 90%, and that seems to have solved my retention issues.

It took the Support Engineer a few reads to think it through properly as well. The wording seems backwards compared to other areas of IT; most things are expressed in terms of remaining space, not used space.

Anyway, seems to be working as expected now.


@Dale Johnson 

Thanks for the feedback. I'm glad you worked it out 👍



Glad you got it worked out.

