
I have a situation where I need to change our old Veeam secondary repository to a Linux Hardened Repository. Below are the details:

 

I have installed RHEL on the DR server.

 

I need some guidelines on:

 

  1. Scripts to configure the Linux Hardened Repository (RHEL 9.2)
  2. Moving the backups from the old repository to the new one without any impact

Hi,

 

To confirm I understand your request: you’ve got a Linux repository as your backup copy target, and you want to change this to be a Linux Hardened Repository without any impact to existing backups? Is that right?


 

  • MicoolPaul No, I have an old repository that the backup copy job writes to. Now I have a new server with local storage and RHEL 9.2 installed. I need to configure this Linux server as the repository for the Backup Copy job and move the backups from the old repository to it.


Hi, the process for this is documented here: https://helpcenter.veeam.com/docs/backup/vsphere/backup_moving.html?zoom_highlight=move+backups&ver=120

 

I don’t know what will happen if you migrate your backups to an immutable repo with regard to them having immutability enabled. I suspect you’d need a new backup chain for that. But that’s me speculating.


I believe you are right about the immutability requiring a new backup chain. Not sure whether you can set the immutable flag manually once the backups are moved.
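For context on the manual-flag idea: on a hardened repo, immutability is implemented with the standard Linux immutable attribute, so at the filesystem level it’s just chattr/lsattr. A hypothetical illustration (the path is made up, and doing this by hand isn’t supported - the Veeam transport service manages the flag and when it gets lifted, so a manually set flag would never expire automatically):

```bash
chattr +i /backups/Job/VM.vbk   # set the immutable bit: even root can't modify or delete the file
lsattr    /backups/Job/VM.vbk   # an "i" in the attribute column confirms it is set
chattr -i /backups/Job/VM.vbk   # clear it again (requires root)
```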


I have a feeling in my testing it did add the immutable flag to my backups as I migrated them to the immutable repo from a non-immutable repo… (using VeeaMover in v12, that is)


Hi @shams - you can find all you need regarding the hardened repo at this Hub post by @Rick Vanover. It has links to a downloadable ISO that installs Linux and the Hardened Repo for you, a hardened repo script, etc. For more hardened repo reference, you can see the User Guide here.
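While you track down the official script, here is a minimal sketch of the Linux-side prep. All names are assumptions to adjust for your environment: data disk /dev/sdb1, mount point /mnt/veeam-repo, and a local deployment account called veeamrepo. This is not Veeam’s official script - use the ISO/script from the Hub post for production.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Local (non-domain) account Veeam uses once to deploy the Data Mover
useradd --create-home veeamrepo
passwd veeamrepo

# XFS with reflink/crc enables Veeam Fast Clone (block cloning)
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1
mkdir -p /mnt/veeam-repo
echo '/dev/sdb1 /mnt/veeam-repo xfs defaults 0 0' >> /etc/fstab
mount /mnt/veeam-repo

# Only the repo account should own or enter the backup path
chown veeamrepo:veeamrepo /mnt/veeam-repo
chmod 700 /mnt/veeam-repo

# SSH (22) is needed only during deployment; 6162 is the Veeam Data Mover,
# 2500-3300 is the default data transfer port range
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=6162/tcp
firewall-cmd --permanent --add-port=2500-3300/tcp
firewall-cmd --reload
```

Once the server is added in the Veeam console (use single-use credentials so nothing is cached), the usual guidance is to disable SSH again, since a hardened repo shouldn’t allow interactive remote access day to day.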

Not sure if files will be immutable upon moving. If you have the ability to test with at least one VM (maybe create a blank VM, back it up to your current repo, then use VeeaMover to move it to your hardened repo and see if it is immutable), I’d recommend doing that.
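For that test, a quick way to verify on the repo host itself (assuming the hypothetical /mnt/veeam-repo path from the sketch above) is to look for the immutable attribute on the moved files:

```bash
# Recursively list file attributes; files under immutability show an "i" flag
lsattr -R /mnt/veeam-repo 2>/dev/null | grep -E '\.(vbk|vib)$'
# A protected file shows something like:
# ----i----------------- /mnt/veeam-repo/TestJob/TestVM.vbk
```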


I am also a fan of using the Scale-Out Backup Repository to accomplish tasks like this. In fact, this is why it was made in the very first place: to allow a backup environment to ‘scale’ to an additional repository, specifically when one filled up.

 

For this situation, one scenario could be to:

  • Create a Scale-Out Backup Repository (SOBR)
  • Add the existing repo to it (it should reconfigure all jobs)
  • Run for a day or two to make sure that much is OK
  • Add the new repo to the SOBR as a performance tier extent
  • Run for a day or two to make sure that much is OK
  • Perform a seal on the original repo, which means new backup data goes to the new repo
  • Run for a day or two to make sure that much is OK
  • You can either let the old backups expire or evacuate them to the new extent

SOBRs are not dead. I’m still using one for my offsite copy job. Backups are copied to a remote-site NAS as the performance tier, but the capacity tier (copy only) is immutable object storage in Wasabi. Works great, and I’d see no reason that you couldn’t use a Linux hardened repo in place of the object storage.

With that said, I don’t use SOBRs as much anymore. They are certainly easier for balancing, but most of the time I just need a backup copy job to go somewhere, and now with direct-to-object it works pretty great without a SOBR. But they are not dead, and certainly have a place.


We definitely use SOBRs still here, even with object storage coming into our designs. It is when they get too big that they become a problem.


Yeah, I don’t know how big is too big. For most of my use, the largest capacity tier (object storage) I have is, I think, around 25TB or so. The largest performance tier is probably around 20TB or so… maybe not that large. Although I do have a client that I’d like to roll out object storage for as an off-site immutable copy, and if I were to use a SOBR for that, the performance tier would be around 90TB or so. I get the feeling that you’re working with much larger numbers than this, though.


“How big is too big...”

Yes, fair point. That’s why I like having a few days running in place and using Seal mode, especially if the migration is not urgent.


Yeah, we tend to have 48TB or 62TB performance tier extents, with 5+ extents per SOBR, so they range from 250+ TB up. I am trying to streamline things here to make them more manageable, easier to deploy, etc.


Yeah, I’m not surprised that your scale is much larger.  I guess I’m fortunate that I generally am not dealing with datasets that large in size or quantity.


Hi @shams -

Just following up on your VHR post. Did any of the provided comments help? If so, please mark one as ‘Best Answer’ so those with a similar question about VHRs who come across your post may benefit.

Thank you.

