Curious what y’all are doing in this sort of situation. 

 

I have a client with a rather large VM (95 TB total, consisting of VMDKs of 100 GB (OS), 30 TB, 19 TB, 20 TB, 9 TB, and 17 TB). The repo (a Synology NAS) has about 90 TB of usable space, with a second NAS at 24 TB. The volumes are iSCSI, presented as RDMs to the virtual Veeam server running Windows Server 2012 R2, and formatted ReFS.

The backup job for this server is split into three jobs: one for the OS disk plus the 9 TB and 30 TB disks (active data) to the primary NAS, one for the 19 TB and 20 TB disks (archive data) to the primary NAS, and one for the 17 TB disk (archive data from an old location) to the secondary NAS.

Current backup jobs are forever forward incrementals keeping 31 days of restore points, which means defragment and compact operations run periodically, but they take a VERY long time (the first job mentioned has been running for almost 16 hours and is at 6%, and that's with only 3 restore points, due to an issue I fixed yesterday where the volumes were showing as RAW because of a bad patch that the assigned engineer only noticed after weeks of failures). Given the size of the VM, we don't have a ton of space to play with for either multiple fulls or the compaction process. I believe we had either forward or reverse incrementals running before (non-forever-forward), but I think we ran into issues with filling up the repo.

Just curious what strategy you'd take to make this better. We are using ReFS, but it feels like we're not getting much out of it in regards to block cloning; otherwise I'd consider turning on synthetic fulls. Are we hitting a limitation due to 2012 R2, perhaps? Any other ideas on how to make this more efficient?

Ultimately, I'd like to get a new, purpose-built Veeam server on site with lots of local storage, but the NAS they're currently using is only 2 or 3 years old, so that could be a hard sell. We could certainly repurpose it for a second copy or something like that, as I don't like using the NAS as a primary repo, and definitely not with ReFS on it. Any ideas of what we can do for the time being?
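To make the space trade-off concrete, here is a rough back-of-envelope model. All the numbers (a ~95 TB full and a ~1% daily change rate) are illustrative assumptions, not measurements from this environment, but it sketches why synthetic fulls without fast clone blow up the repo while forever forward (or fast-clone synthetics) stays close to one full plus the increments:

```python
# Rough storage model; every figure here is an illustrative assumption.
full_tb = 95.0          # size of one full backup (assumed ~ VM size)
daily_inc_tb = 0.95     # assumed ~1% daily change rate
retention_days = 31

# Forever forward incremental: one rolling full + 30 incrementals.
forever_forward = full_tb + (retention_days - 1) * daily_inc_tb

# Weekly synthetic fulls WITHOUT block cloning (e.g. 2012 R2 ReFS):
# roughly 5 independent fulls must coexist to cover 31 days,
# plus ~26 incrementals between them.
no_fast_clone = 5 * full_tb + 26 * daily_inc_tb

# Weekly synthetic fulls WITH ReFS fast clone (2016+):
# synthetic fulls share unchanged blocks with the existing chain,
# so the physical footprint is roughly the same as forever forward.
fast_clone = full_tb + (retention_days - 1) * daily_inc_tb

print(f"forever forward incremental : {forever_forward:6.1f} TB")
print(f"synthetic fulls, no fast clone: {no_fast_clone:6.1f} TB")
print(f"synthetic fulls, fast clone   : {fast_clone:6.1f} TB")
```

With these assumed numbers, synthetic fulls without fast clone need roughly 4x the space of the other two options, which lines up with the "filling up the repo" experience described above.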

Hi, 2012 R2 has no fast clone, so you won't be leveraging any benefits from that. But I use weekly synthetic fulls with ReFS, which removes the need to defragment and compact. As per the Veeam documentation:

 

  • The Defragment and compact full backup file option works for forever forward incremental or reverse incremental backup chains. For this reason, you must not schedule active or synthetic full backups.
     

Use 2016 or newer for ReFS 🙂 you get space-saving benefits too when it's used with forward incrementals!


Forgot to add the helpcenter link: https://helpcenter.veeam.com/docs/backup/vsphere/backup_compact_file.html?ver=110



This is what I was thinking and was afraid of. And to confirm: if I upgrade and mount the existing volume on a 2016 or 2019 server, it should upgrade the ReFS version and feature set at mount, correct?



Correct, Windows will, but Veeam won't recognise this; you'll need to re-add the repo and, IIRC, create new backup chains. 🙂 Is your ReFS file system using a 64K block size?
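For anyone wanting to verify the block size: on the repo server itself, `fsutil fsinfo refsinfo <drive>:` reports "Bytes Per Cluster". As a rough cross-check, here is a small Python sketch using the Win32 `GetDiskFreeSpaceW` call; the actual query is Windows-only, and the drive letter `R:\` is a hypothetical placeholder for the repo volume:

```python
import ctypes
import sys

def cluster_bytes(sectors_per_cluster: int, bytes_per_sector: int) -> int:
    """Allocation unit (cluster) size in bytes."""
    return sectors_per_cluster * bytes_per_sector

def volume_cluster_bytes(root: str) -> int:
    """Query a Windows volume's cluster size (Windows-only sketch)."""
    spc = ctypes.c_ulong()    # sectors per cluster
    bps = ctypes.c_ulong()    # bytes per sector
    free = ctypes.c_ulong()   # free clusters (unused here)
    total = ctypes.c_ulong()  # total clusters (unused here)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        root, ctypes.byref(spc), ctypes.byref(bps),
        ctypes.byref(free), ctypes.byref(total))
    if not ok:
        raise OSError(f"GetDiskFreeSpaceW failed for {root!r}")
    return cluster_bytes(spc.value, bps.value)

if sys.platform == "win32":
    # R:\ is a hypothetical repo drive letter; 65536 means 64K clusters.
    print(volume_cluster_bytes("R:\\"))
```

A 64K cluster on a 512-byte-sector disk shows up as 128 sectors per cluster (128 × 512 = 65536).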

 

I suggest either 2019 or 2022 versions of ReFS btw 👍


Yes, definitely don’t use the ReFS which comes with 2012 R2…

Mhhh, I am not really sure if the ReFS upgrade happens without problems… I am interested in some hands-on insights, as I will have to upgrade two Windows 2016 servers using ReFS repos with a lot of block cloning to Windows 2019 or 2022…

Edit:

OK, @MicoolPaul was faster 😎 This is what I am afraid of… You have to create new backup chains and will lose your block-cloning savings. They will build up again, but at first you will need more physical disk space…


Thanks guys. Yeah, I'd go to 2019 or 2022 if the client has the licensing; I honestly can't remember if they have 2019 or if we need to purchase it. And yeah, ReFS on a Synology is no longer a thing I do, but we still have a lot of them out there, and ReFS on 2012 isn't that great. What I had read is that it is upgraded when the filesystem is mounted, but if we have to rebuild the repo and backup chains, I'd rather just reformat anyway. And yes… 64K blocks, unless my counterpart built it and ignored the warnings, but I think I have him corrected on that now. LOL

But back to the topic: it sounds like I have the existing setup about as tweaked as I can get it. I either need to deal with not much space for fulls or not much space for compactions, and really can't do much more until I'm on a newer OS and can start things over.


Yep, very chicken-and-egg. If you're surviving for now, Veeam v12 will introduce VeeaMover for easier repository migration, and it includes the ability to reprocess the backups to benefit from block cloning on migration. We won't know all the details until it's GA, but it might be something whereby you can buy a new server with 2022, or put Linux on a new box, and then migrate?


Oh yes, VeeaMover is a very nice function… 🙂


Yeah, that’s one of the features I’m most excited for.  That and being able to move a VM between jobs.



That is very interesting, too...



Yes this and the SOBR Rebalance is going to be sweet!


Good ideas shared above, but I also don't like the config of this VM. You'd need some time to simulate this, but I'd love to see what happened if the Veeam Agent were used to back it up and then restore a new VM with a "cleaner" configuration (the Agent can convert the backup to a VM). That could take some time depending on the storage involved, though. Overall a tricky scenario!

