Hi Team

We have a challenge with one of our customers.

They have requested that a monthly full backup be retained for 12 months.

They want these monthly fulls stored in the cloud, given limited on-premises storage space.

We added Azure Blob object storage and created a SOBR with it as the capacity tier and the on-premises repository as the performance tier. The move policy is set to tier data from performance to capacity 5 days after the job completes. The job is configured to run a monthly full backup on the last day of the month, with 12 restore points of retention.

The backups are not being moved: two full backups have run so far, and both are still sitting on the local on-premises repository.

Support initially said we would need to wait for a second backup before the previous backup would move, but this has not happened. Now they say the configuration should be part of a GFS policy.

Nothing substantial in terms of configuration has been clearly provided.

We need some help or recommendations on this.

Note: the customer already has an existing on-premises backup chain for the same workloads, with incrementals running; this requirement to store in the cloud is purely from a compliance standpoint.

> They had requested for the same to be stored on cloud for the same considering onpremise storage space

 

The move policy is not the correct offload type if they want to keep the same backups in the cloud.

Why not just use the copy policy? Only then will they have the same backups both on-premises and in the cloud.

The move policy removes restore points from the performance tier. That's not the customer's goal, if I understand your request correctly.

To use the move policy with a forward incremental backup chain, the job must create regular full backups, or the chain cannot be moved. If you never create a full backup (forever forward incremental), backup files cannot be moved.
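The move/copy distinction can be illustrated with a toy model (a sketch of the policy semantics only, not Veeam's actual implementation; the file names and `offload` function are made up):

```python
from dataclasses import dataclass

@dataclass
class RestorePoint:
    name: str
    in_active_chain: bool  # True while the job is still writing to this chain

def offload(points, policy):
    """Toy model of SOBR offload: returns (performance_tier, capacity_tier)."""
    if policy == "move":
        # Move: inactive restore points leave the performance tier entirely.
        perf = [p.name for p in points if p.in_active_chain]
        cap = [p.name for p in points if not p.in_active_chain]
    elif policy == "copy":
        # Copy: everything is duplicated to capacity; nothing is removed locally.
        perf = [p.name for p in points]
        cap = [p.name for p in points]
    else:
        raise ValueError(f"unknown policy: {policy}")
    return perf, cap

chain = [
    RestorePoint("Jul-full.vbk", in_active_chain=False),  # sealed by the Aug full
    RestorePoint("Aug-full.vbk", in_active_chain=True),   # latest, still active
]
print(offload(chain, "move"))  # July full offloaded; August full stays local only
print(offload(chain, "copy"))  # both fulls exist on-premises AND in the cloud
```

With "copy" the customer keeps a local copy of everything; with "move" only sealed (inactive) chains leave the performance tier, which is the behavior discussed in the rest of this thread.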


Morning!

 

To confirm, have you seen the Help Center documentation? It should answer a lot of questions:

https://helpcenter.veeam.com/docs/backup/vsphere/capacity_tier_move.html?ver=110
 

Support are correct that only an inactive backup chain will be offloaded (moved) to object storage, hence you had to wait for your second full backup for anything to happen, though I understand you’re saying it still hasn’t happened yet.

 

Can you share the retention policy and GFS settings of your backup job?

 

Also, what happens if you attempt to manually move the backups via this process:

https://helpcenter.veeam.com/docs/backup/vsphere/moving_to_capacity_tier.html?ver=110


> The move policy is not the correct offloading type, if they want to store the same in the cloud. Why not using just copy offload? [...]

-- The customer wants to store 12 monthly full backups and yearly full backups for 10 years.

-- He does not have space on the repository, so he expects to store these in the cloud.


> To confirm, have you seen the helpcenter documentation? [...] Can you share the retention policy and GFS settings of your backup job? Also, what happens if you attempt to manually move the backups via this process: https://helpcenter.veeam.com/docs/backup/vsphere/moving_to_capacity_tier.html?ver=110

-- I have not attached an archive tier to this SOBR; it is just the performance and capacity extents at the moment.

The backup job is configured for active fulls, so no chain should be created, since every backup is an independent VBK file.

I am hoping these will move to the capacity tier, which has not occurred so far.

No GFS has been configured for the job, or even at the repository level. I am trying to figure out the easiest way of moving only the full backups created by this job to the cloud.

The requirement, as mentioned, is:

>12 monthly fulls to be stored in the cloud

>10 yearly fulls to be stored in the cloud

What would be the best way of creating these jobs?

I have not attempted to manually move backups to the capacity tier, though the option does show up when we right-click the backups for this job. I am hoping this will be automated.


@karun.keeriot you're doing active full backups every day? Why is this?

 

You would save huge amounts with ReFS/XFS file system and synthetic full backups leveraging fast clone, and then you wouldn’t have a huge amount of redundant blocks.

 

What is your retention policy on the backup job?

 

To get the retention you want, you need GFS configured for those parameters (12 monthly backups & 10 yearly backups).

You also need the retention period of your normal backup job to be long enough that data is getting offloaded to the object storage; otherwise you'd need to configure the "copy to" mode as well for immediate backup offload.
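As a rough sketch of what that GFS retention would keep over time (a toy calculation only; it assumes the December full is the one flagged as yearly, whereas real Veeam GFS assigns flags at creation time):

```python
from datetime import date

def gfs_retained(fulls, monthly_keep=12, yearly_keep=10):
    """fulls: chronologically sorted month-end full-backup dates (toy model)."""
    monthlies = fulls[-monthly_keep:]                # the 12 newest monthly fulls
    year_ends = [d for d in fulls if d.month == 12]  # assume Dec full = the yearly
    yearlies = year_ends[-yearly_keep:]              # the 10 newest yearly fulls
    return sorted(set(monthlies) | set(yearlies))

# Three years of month-end fulls:
fulls = [date(2019 + y, m, 28) for y in range(3) for m in range(1, 13)]
kept = gfs_retained(fulls)
# Kept: the 12 fulls from 2021, plus the Dec 2019 and Dec 2020 yearlies (14 total)
```

The point is that a single job with GFS enabled covers both requirements at once, instead of running separate monthly and yearly jobs.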


The offloading is not happening because the previous chain is not yet inactive. It’s good that you are using a periodical full backup (in your case active full) but the full retention needs to be achieved on the new chain.

 

Let’s say you have configured 7 days retention. Then you need the new full backup plus 6 days of incremental before the previous chain can be “moved” to the cloud.
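That example can be written as a quick back-of-the-envelope calculation (assuming 7 restore points and one incremental per day, per the scenario above):

```python
retention_points = 7   # restore points the job is configured to keep
new_chain_points = 1   # day 0: the new full backup exists
days_waited = 0

# The previous chain stays on the performance tier until the new chain
# alone can satisfy retention (full + 6 incrementals in this example).
while new_chain_points < retention_points:
    days_waited += 1
    new_chain_points += 1  # one incremental per day

print(days_waited)  # 6 days of incrementals after the new full
```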

 

You didn’t share too many details, so I’m going to put down a few suggestions without knowing if they can be used:

  1. If using a server repository, make sure you change the drive to use ReFS (or XFS on Linux). This will allow fast cloning.
  2. When done with item 1, change the job from active full to synthetic full to save local disk space.

@MicoolPaul -

Hi Mike, synthetic full was initially selected, but the customer has requested active fulls, presumably from a compliance standpoint for data integrity.

The retention policy on the backup job is 12 restore points, not days.

We did the configuration this way in order to have separate jobs for monthly fulls and for yearly fulls. Seems odd, but that is where we are.

If we select the copy mode, from my understanding we will have 2 copies, on performance and capacity, which we don't want. We just need the offload.

At the moment we have 2 full backups.

Our team has taken screenshots of the configuration; I could share them if you can provide your email address, please.


You need to drop those extra jobs and configure GFS on the primary job.


Hi Team

Sorry to bother you guys, and thanks for all the inputs so far.

I had a chance to remote into the customer site, and what we observed is that the 1st full backup has moved to the capacity tier, which means a tiering session has occurred, leaving a VBK stub of a few MBs on disk. The actual latest full backup (25 GB), created on 31st Aug, is still on the performance tier.

When I try to perform a restore for the job, I can see the 1st restore point placed on the capacity tier, and only a single VBK residing on the performance tier.

Just to provide some background:

We have configured a full backup job to run on the last day of the month with 12 restore points as the retention. Backups go to a scale-out repository with the option to move to the capacity tier 5 days after the job completes, with an override if disk utilization goes beyond 70%.

Based on my understanding, a GFS configuration would move backups down to the archive tier, and they would not stay on the performance tier. The customer is fine not placing these under a GFS configuration, since he does not want to interfere with the existing production backups, which already archive weekly to tape.

Would be glad to get your inputs on why the latest full is still on disk while the older full has moved to the cloud.

Also, if I move to a GFS configuration, I could enable GFS on the same job, reducing the retention from 12 restore points to 1 or 2, and enable GFS for 12 monthly and 10 yearly. Does that make sense?


 

 



I believe the last full backup is part of the active chain, so it will not be moved until a new full backup is created.

Veeam cannot move the full backup of an active chain, because the incrementals depend on that full to work.
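That matches what a simple eligibility model predicts (a toy sketch, not Veeam's code; the dates are the two fulls from this thread, and the 5-day window is the SOBR setting described earlier):

```python
from datetime import date, timedelta

def eligible_for_move(full_dates, today, window_days=5):
    """Toy model: with monthly active fulls, the newest full is the active
    chain and is never moved; older fulls move once past the window."""
    newest = max(full_dates)
    return [d for d in full_dates
            if d != newest and today - d >= timedelta(days=window_days)]

fulls = [date(2021, 7, 31), date(2021, 8, 31)]
print(eligible_for_move(fulls, today=date(2021, 9, 10)))
# only the July full qualifies; the August full is the active chain and stays
```

So the 31st Aug full should offload once the September full seals it and the 5-day window has elapsed.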

