
I swear I’ve seen some posts about this but can’t find them. I haven’t dipped into Linux Hardened repos with immutability yet, though one of my engineers recently set one up for a client, but only as a copy repository. I’m putting together a proposal to replace the internal Synology NAS that is our primary repository with a Dell PowerEdge R550 with a fair amount of local space (8x 8TB disks in RAID 5 or 6), connected via 10Gb or 25Gb links. Backup data is currently copied to a SOBR at a remote site that uses a Synology NAS plus Wasabi for immutable object storage. Right now the primary site keeps 30 days of forward incrementals with active fulls and no GFS retention, but I plan to ramp up more local retention at some point; I ran into space issues a few months back and ended up doing all of my GFS work on the copy repo at the remote site for the time being.

Any gotchas I should be looking out for with this sort of configuration, primarily in regard to using a hardened box with immutability as the primary repo?
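For what it’s worth, the repo-side prep I’m planning is basically just formatting the volume as XFS with reflink enabled so fast clone works. A rough provisioning sketch, where the device name and mount point are placeholders, not final values:

```
import subprocess

# Rough provisioning sketch for the XFS repo volume (placeholders, adjust to taste).
DEVICE = "/dev/sdb"             # assumed device name for the RAID volume
MOUNT_POINT = "/mnt/veeam-repo"

# Veeam's XFS integration (fast clone) needs reflink, which in turn needs crc=1.
subprocess.run(["mkfs.xfs", "-b", "size=4096", "-m", "reflink=1,crc=1", DEVICE], check=True)
subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)

# Verify reflink=1 shows up before pointing Veeam at the mount point.
subprocess.run(["xfs_info", MOUNT_POINT], check=True)
```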

The only issue you may come across is reporting for clients. That is the biggest thing we have hit, but other than that we are slowly moving to XFS immutable repos, even to replace ReFS ones. It will be easier when v12 comes with VeeaMover. 😁


The only issue you may come across is reporting for clients. That is the biggest thing we have hit, but other than that we are slowly moving to XFS immutable repos, even to replace ReFS ones. It will be easier when v12 comes with VeeaMover. 😁


Yeah, I’m excited for VeeaMover, but I’m going to need to create space. I have a fair number of clients that will need to move from per-job to per-VM backup files. Fortunately, I don’t have to worry about reporting because I don’t host client data; the data is all stored at each client site on their own hardware.

As for my copy job repos, when converting from ReFS to XFS I need to land the data somewhere temporarily until I can move it back; I have some loaner hardware for that, so it could work. Actually, it’s easier for those going out to Wasabi, because I believe I should be able to clear out the performance tier and redownload from the capacity tier, but I haven’t quite worked that all out. Maybe I don’t even need the data sitting in the performance tier: I could move it all to the capacity tier, wipe the performance tier, create a new one, and then let the data filter back out there; if we need something back for some reason, we can download just that. Interesting thought…
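Before wiping the performance tier I’d probably sanity-check that everything actually exists in the Wasabi bucket backing the capacity tier. Something rough like this (bucket, prefix, endpoint and credentials are placeholders, and it’s no substitute for a SOBR rescan):

```
import boto3

# Placeholders: bucket, prefix, endpoint and credentials are not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.wasabisys.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

total_objects = 0
total_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="veeam-capacity-tier", Prefix="Veeam/"):
    for obj in page.get("Contents", []):
        total_objects += 1
        total_bytes += obj["Size"]

print(f"{total_objects} objects, {total_bytes / 1024**4:.2f} TiB in the capacity tier bucket")
```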


You can certainly use the hardened repository as the primary backup repository. Just be aware that (at the moment) there are some limitations:

Depending on your policy and RTO target, you can either migrate your existing backups or start from scratch for the next 30 days. You’re also right that if you wipe the performance tier, a rescan will download missing backups back from the capacity tier; I’m not sure if you can prevent that. I would just suggest that you move from active full backups to synthetic full backups to benefit from the XFS integration.
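For context, the reason synthetic fulls are so cheap on XFS is reflink block cloning. A tiny illustration of the underlying filesystem call (not Veeam’s code; paths are placeholders and both files must sit on a reflink-enabled XFS mount):

```
import fcntl

FICLONE = 0x40049409  # Linux ioctl for a full-file reflink clone

# Placeholders; both files must live on a reflink-enabled XFS (or Btrfs) mount.
with open("/mnt/veeam-repo/source.vbk", "rb") as src, \
     open("/mnt/veeam-repo/clone.vbk", "wb") as dst:
    # The clone shares the source's data blocks instead of copying them,
    # so it completes almost instantly and uses no extra space until blocks diverge.
    fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
```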


@regnor

Move Policy will work and can be configured, with a few exceptions to remember:

  • Short-term retention restore points will not be moved if they are within the immutability period; they will only be copied. Once the immutability period is over, they are deleted from the hardened repository (a toy example follows the documentation link below).
  • When the move policy is enabled on a SOBR, GFS restore points are only immutable for the immutability period configured on the hardened repository; they can be deleted after that period.
    When the move policy is disabled on the SOBR, GFS restore points are immutable for their entire lifetime.

https://helpcenter.veeam.com/docs/backup/vsphere/hardened_repository.html?ver=110#retention-scenarios
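As a toy illustration of the first rule (not Veeam’s actual logic, and the immutability period here is just an assumed value):

```
from datetime import datetime, timedelta

IMMUTABILITY_DAYS = 14  # assumed hardened repository immutability setting

def move_action(restore_point_created: datetime, now: datetime) -> str:
    immutable_until = restore_point_created + timedelta(days=IMMUTABILITY_DAYS)
    if now < immutable_until:
        return f"copy only (immutable until {immutable_until:%Y-%m-%d})"
    return "move; can now be deleted from the hardened repository"

now = datetime(2022, 6, 1)
print(move_action(datetime(2022, 5, 25), now))  # inside the window  -> copy only
print(move_action(datetime(2022, 5, 1), now))   # window has expired -> move
```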


@Mildur I would say we’re both right, but technically your answer is better than mine 😉


You can certainly use the hardened repository as the primary backup repository. Just be aware that (at the moment) there are some limitations:


Sorry, yes, those are different repos that I’m scoping out right now. Currently the hardened repo will be the primary repo, while the offsite repo that’s part of a SOBR runs through a Windows repo/proxy server, but that could change down the road.


Depending on your policy and RTO target, you can either migrate your existing backups or start from scratch for the next 30 days. You’re also right that if you wipe the performance tier, a rescan will download missing backups back from the capacity tier; I’m not sure if you can prevent that. I would just suggest that you move from active full backups to synthetic full backups to benefit from the XFS integration.

Yes, once I’m no longer using a NAS as the backing storage and am on local disks, I plan to change to synthetic fulls rather than active fulls.

Thanks!


@regnor Be careful with synthetic fulls 🙂, they can lead to surprises.

The issue comes with storage that is very fast for sequential I/O but slow for random I/O: block cloning (or dedup) gradually randomizes the I/O pattern over time, because a synthetic full ends up referencing blocks scattered across older backup files.

The pain is felt on mechanical disks, where random access is dominated by the seek time of the drive heads.

This problem is partially solved with SSD/NVMe, and it also applies to read operations (offload, copy to tape, restore).
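A rough way to see the sequential-vs-random gap on your own repo volume (test file path and sizes are placeholders; note the OS page cache can hide the difference unless the file is much larger than RAM):

```
import os, random, time

PATH = "/mnt/veeam-repo/iotest.bin"   # placeholder test file on the repo volume
BLOCK = 1024 * 1024                   # 1 MiB per read
BLOCKS = 1024                         # 1 GiB test file

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(BLOCKS):
            f.write(os.urandom(BLOCK))

def read_pass(offsets):
    start = time.monotonic()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    elapsed = time.monotonic() - start
    return (len(offsets) * BLOCK / 1024**2) / elapsed  # MiB/s

offsets = [i * BLOCK for i in range(BLOCKS)]
seq = read_pass(offsets)
rnd = read_pass(random.sample(offsets, len(offsets)))
print(f"sequential: {seq:.0f} MiB/s   random: {rnd:.0f} MiB/s")
```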

