
Hi all,

 

I want to keep my backups for a year.
Storage disks are expensive, so I can't keep daily backups for that long, and I'm going to use GFS.

 

If I want to keep 1 recovery point per week for 1 year, should I set it to “48 weeks” (4 weeks * 12 months) in the 'Configure GFS' wizard?


 

Does it make sense to also assign a monthly GFS flag if I assign 48 weekly GFS flags?
I'm still not sure of the best practice for how to use GFS. :(

 

Please, any advice would be appreciated.

Thank you.

Hi @hoon0715 -

Have you looked through the User Guide on GFS?
https://helpcenter.veeam.com/docs/backup/vsphere/gfs_retention_policy.html?ver=120

and

https://helpcenter.veeam.com/docs/backup/vsphere/gfs_how_flags_assigned.html?ver=120

It is a bit confusing, and in my experience the best way to fully understand the behavior you want is to test a couple configurations. Since it’s GFS, you have to be patient as it can take time to create GFS backup files.

As far as Best Practice with GFS...well, there isn’t any best practice for GFS that I’m aware of.

At the very least, I think what I’d do is configure 4 weeklies, 12 monthlies, and 1 yearly for GFS. This way you won’t have “bulk” (larger) backup files. You can utilize Fast Clone (explained below) to help with storage space because the data would already exist on disk.
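If it helps to visualize the difference, here’s a rough back-of-the-envelope sketch (plain Python, not a Veeam tool) comparing the recovery-point counts of the two schemes. The 500 GB per-full figure is just a placeholder assumption; substitute your own full backup size.

```python
# Rough, illustrative comparison of GFS recovery-point counts (not a Veeam tool).
FULL_SIZE_GB = 500  # hypothetical size of one GFS full backup -- replace with yours

schemes = {
    "48 weekly fulls":                  {"weekly": 48, "monthly": 0,  "yearly": 0},
    "4 weekly + 12 monthly + 1 yearly": {"weekly": 4,  "monthly": 12, "yearly": 1},
}

for name, flags in schemes.items():
    points = sum(flags.values())
    raw_gb = points * FULL_SIZE_GB  # worst case, without Fast Clone / block cloning
    print(f"{name}: {points} GFS recovery points, ~{raw_gb} GB if every full is stored in full")
```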

The one storage configuration I *highly* recommend when you create your Repository storage for any jobs you run, especially jobs with GFS configured, is Veeam Fast Clone technology:
https://helpcenter.veeam.com/docs/backup/vsphere/backup_repository_block_cloning.html?ver=120

What this does, in a nutshell, is help you make the best use of storage space by not creating multiple copies of duplicate data. If a backup requires a block of data to create its file, and that block of data is already on the storage, Veeam will not create another copy of the data but will instead use pointers to the data already existing on disk. With GFS configured, this is a great space-saving storage technology, since GFS files are always full backups. To use Fast Clone, you must configure your Repository storage to use XFS on Linux or ReFS on Windows.
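To see the underlying mechanism in action, here’s a minimal sketch assuming a Linux repository volume formatted with XFS and reflink enabled; the /mnt/xfs-repo path and file names are made up for the example. It creates a dummy “full backup” file, then makes a reflink copy of it (the same filesystem feature Fast Clone builds on), and shows the clone consumes almost no additional space:

```python
# Illustrative only -- run on an XFS volume with reflink support.
# /mnt/xfs-repo and the file names are hypothetical; adjust to your environment.
import os
import shutil
import subprocess

REPO = "/mnt/xfs-repo"                        # hypothetical XFS mount point
src = os.path.join(REPO, "full_backup.vbk")
clone = os.path.join(REPO, "gfs_full.vbk")

# Create a dummy 256 MiB "full backup" file.
with open(src, "wb") as f:
    f.write(os.urandom(256 * 1024 * 1024))

free_before = shutil.disk_usage(REPO).free

# A reflink copy shares data blocks with the source instead of duplicating them;
# Fast Clone relies on this XFS capability (and on block cloning with ReFS on Windows).
subprocess.run(["cp", "--reflink=always", src, clone], check=True)

free_after = shutil.disk_usage(REPO).free
print(f"Extra space consumed by the clone: {(free_before - free_after) / 1024**2:.1f} MiB")
```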

Hope this helps.


This is a common question. The answer is that there isn’t really a ‘best practice’ here, as it differs for every business. Your own policies - whether they are formally approved, published policies, or just ‘what you tell everyone you do’ - will govern what these need to be set to.

Typically, when I work with my customers, I would ask how often this data needs to be accessed.

  • Do you get frequent requests for restore out of longer-term storage? 
  • Are there any specific reasons (policy, governance, legislative) that require this number of recovery points?
  • What would happen if - say - you couldn’t recover a specific file version from 47 weeks ago? 

As you mention, disk is expensive; however, the biggest impact on disk storage is typically the daily incremental backups (obviously after the first full!) - so you may want to consider keeping fewer daily incremental backups, but again, with respect to the recovery requests you receive. You could, for example, keep 24 weeks of weekly fulls, and then 6 monthly fulls after that. You still have 30 more recovery points on top of your incremental backups, and are meeting your yearly requirement.
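As a quick sanity check on that example (rounded figures only; the exact dates depend on how your job assigns GFS flags), a hypothetical tally:

```python
# Rough tally of the "24 weekly + 6 monthly" GFS idea above (approximate maths only).
weekly_points = 24
monthly_points = 6

total_points = weekly_points + monthly_points          # GFS recovery points kept
coverage_weeks = weekly_points + monthly_points * 4    # ~4 weeks per month, approximate

print(f"GFS recovery points kept: {total_points}")      # 30
print(f"Approximate coverage: {coverage_weeks} weeks")  # ~48 weeks, i.e. roughly a year
```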

Further to this, you may also consider using separate policies for different data (if applicable). I don't know your specific use case, but you could set up ‘gold’, ‘silver’, and ‘bronze’ retention policies, apply these to different jobs, and map target systems to those jobs.

Hopefully some of these ideas give you what you need. But if I were you, I wouldn’t worry about best practice, and would instead try to figure out what you need. Don’t forget the Veeam sizing tool is publicly available, and can help you do the calculations on what your disk capacity requirements will be: https://calculator.veeam.com/vse/

Let us know how you get on - always good to see real life use cases and solutions.

 


@hoon0715  Was this question posted twice?

I just replied on the other thread, but then saw this!


just removed the duplicate topic and moved your answer here, John! @jsb00227 


 

Thank you - I might be echoing some of the comments that @coolsport00 stated - sans the excellent recommendation around Fast Clone which, yes, would absolutely also help with reducing disk capacity usage (basically filesystem-level deduplication, in all but name). Hopefully this is the insight OP needed!


Thank you for the assist @safiya 😊

@hoon0715 … did we help you out? Did you have any further questions about anything?


Hi @hoon0715 -

I was just following up on your post. Do you still have questions? If so, don’t hesitate to ask. If not, we ask that you please mark one of the provided comments as ‘Best Answer’ so others with a similar question who come across your post may benefit.

Thank you.

