
Most of us love and embrace SOBRs as well as hardened Linux repos for obvious reasons.
Once in a while a SOBR shifts out of balance, because VBR cannot predict the sizes of VMs, and especially of chains with changes.

Therefore, for a non-immutable repo we have ways to rebalance the SOBR: KB3100: Manually moving backup files between Scale-Out Backup Repository extents (veeam.com)
V12 will even preserve the space-saving optimizations here: New in VBR v12: True per-machine backup files - vNote42
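The KB3100 workflow boils down to three steps: put the source extent into maintenance mode, move the job's files to the target extent while keeping the same folder structure, then rescan the SOBR. A minimal sketch of just the file-move step, with temporary directories standing in for real extent mounts (all paths are hypothetical):

```shell
#!/bin/sh
# Simulated move of one job's chain between two extents (KB3100 style).
# In production, the Veeam console steps -- maintenance mode before the
# move and a SOBR rescan after it -- are mandatory.
src=$(mktemp -d)   # stands in for /mnt/extent1
dst=$(mktemp -d)   # stands in for /mnt/extent2
mkdir -p "$src/JobName"
touch "$src/JobName/vm1.vbk" "$src/JobName/vm1.vib"
# The per-job folder structure must be identical on the target extent:
mv "$src/JobName" "$dst/JobName"
ls "$dst/JobName"
rm -rf "$src" "$dst"
```

This only works on a non-immutable repo, which is exactly the limitation discussed below.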

This is not, and will not be, an easy task for the hardened Linux repo, though, as immutability prevents files from being moved between the extents.

For example, I currently face a customer's SOBR with one extent having 0% free while the others all have more than 80% free...
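A quick way to spot this kind of imbalance from the repo side is to check used space per extent mount point; the mount paths below are made-up examples:

```shell
#!/bin/sh
# Print used% for each SOBR extent mount point (paths are hypothetical).
for mnt in /mnt/extent1 /mnt/extent2 /mnt/extent3; do
    [ -d "$mnt" ] || continue      # skip extents not mounted on this box
    df -P "$mnt" | awk -v m="$mnt" 'NR==2 { printf "%-16s %s used\n", m, $5 }'
done
```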

The question is: Have you ever faced this challenge? How did you solve it?
(Rebuild chains from scratch? Disable immutability, reorganize the files, re-enable immutability?)

That has always been a huge headache. Even with non-immutable repos it was a challenge, especially in a Cloud Connect setup with many tenants. I would think that with immutability you would have no choice but to run active fulls. If I remember correctly, the active full will land on the extent with the least amount of usage. Sealing the extent would of course be the first step. I remember that way back in the first Cloud Connect architect book there was a section about re-balancing which stated “coming soon”, and now with V12 it will be here 🙂. Interesting to hear what others have done with unbalanced SOBRs.


With v12 the rebalance is going to be huge. Right now, we just seal an extent to ensure writing takes place on extents with more space. If there is not enough space, we add another extent to the SOBR, and once the retention runs out on the sealed one, we unseal it again for use. It is not the greatest way to do things, but until v12 comes out it is the easiest way versus getting into copying files, etc., especially on Cloud Connect.


I agree, currently sealing sounds like the best option, then unsealing when the retention runs out. 

 

How is V12 going to handle this if the SOBR is immutable, though? Same idea?

 

 



Yes, from what I remember from VeeamON, I believe that as long as all extents in a SOBR are immutable, v12 can rebalance and move things around.



I don’t think VeeaMover will be able to move or rebalance anything under active immutability. That would breach the immutability right away.

And the active end of the chain will always be the immutable part of it.

The only option here would be to disable immutability directly on the Linux box (as root) and then move and mess around… ;)

So sealing will be the only easy way to go. Of course, that means creating new fulls on another extent without space optimization.
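For context, "disable immutability directly on the Linux box" maps to the filesystem immutable attribute that the hardened repository sets on closed backup files. A sketch of what that would involve; the paths are hypothetical and the root-only commands are deliberately left commented out, since running them bypasses exactly the protection immutability provides:

```shell
#!/bin/sh
# The hardened repo protects backup files with the immutable flag:
#   lsattr /mnt/extent1/Job/vm1.vbk      # 'i' in the flags -> immutable
#   chattr -i /mnt/extent1/Job/vm1.vbk   # clear the flag (root only!)
#   mv /mnt/extent1/Job/vm1.vbk /mnt/extent2/Job/
#   chattr +i /mnt/extent2/Job/vm1.vbk   # re-arm it on the target extent
# While the flag is set, the file cannot be modified, renamed, or deleted.
# A file without the flag moves freely:
f=$(mktemp)
mv "$f" "$f.moved" && echo "no immutable flag: move succeeds"
rm -f "$f.moved"
```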



My mistake on that one; yeah, that makes sense. So sealing is the only way there.

Also, if you have XFS configured on the SOBR, even with immutability the space savings should work with an active full.



Unfortunately not. Active fulls never give you space savings on any fast-cloning file system. Only a dedupe appliance can dehydrate active fulls.

Even synthetic fulls won’t give you savings if you sealed the original extent: the next full will be generated independently on another extent.

There are no savings between extents. The magic only lives within each of them, independently.

So the sealing method will consume considerably more space. For larger and space-limited environments, I would opt for occasionally disabling immutability and using VeeaMover once we have it.
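The "magic only lives within each extent" point can be illustrated with plain cp: XFS fast clone is reflink-based block sharing, and a reflink can only exist inside one filesystem. Across filesystems (i.e. across extents), or on a non-reflink filesystem, the copy is always a full physical copy. The sketch below runs in a temp directory, so the full-copy fallback is what you will likely see:

```shell
#!/bin/sh
# Reflink (XFS fast clone) shares blocks only WITHIN one filesystem.
work=$(mktemp -d)
dd if=/dev/zero of="$work/full1.vbk" bs=1024 count=64 2>/dev/null
# --reflink=auto: share blocks if the fs supports it, else copy in full.
# On an XFS extent with reflink=1 the clone is nearly free; copied to a
# DIFFERENT extent, the data always consumes the full amount of space.
cp --reflink=auto "$work/full1.vbk" "$work/full2.vbk"
cmp -s "$work/full1.vbk" "$work/full2.vbk" && echo "copies identical"
rm -rf "$work"
```

Note that `--reflink` is a GNU coreutils option; this sketch assumes a Linux box, which a hardened repo is anyway.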


It sounds like it could be a great question for the Veeam 100 Summit @Chris.Childerhose @Michael Melter

Sealing seems to be the best option at the moment, as written above.



Yes, I think I will try to bring this up at the Summit. 😁


Hello, I'm reviving this topic because I'm facing a specific problem: I have an immutable repository that is approaching saturation. When I add a new extent, new data will still be written to the first extent because of the data locality placement policy. I was considering a rebalance, but according to the documentation, immutable data seems to be ignored.

The seal method doesn't seem viable to me because ultimately I'll end up facing the same problem, unless I add multiple extents from the start.

I would have liked to be able to move the data while preserving the immutable attribute, but I haven't found a way to do this.
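One reason there is no way to do this: the immutable flag itself blocks rename, so even a same-filesystem move is refused until root clears it; and a move across extents (separate filesystems) is a copy plus delete that creates a brand-new inode, so the flag would not carry over and would have to be re-applied as root on the target. A small same-filesystem demonstration with a normal (non-immutable) file, in a temp directory:

```shell
#!/bin/sh
# Within ONE filesystem, mv is a rename(): same inode, attributes survive
# (though the immutable flag would block the rename in the first place).
work=$(mktemp -d)
echo "backup data" > "$work/chain.vbk"
before=$(stat -c %i "$work/chain.vbk")
mv "$work/chain.vbk" "$work/moved.vbk"
after=$(stat -c %i "$work/moved.vbk")
[ "$before" = "$after" ] && echo "same inode: attributes preserved"
# Across filesystems (i.e. across extents), mv copies into a NEW inode,
# so the immutable flag is lost and would need chattr +i again as root.
rm -rf "$work"
```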


What about adding another VHR and using the Move Backup feature instead of going through the SOBR? It might work better than SOBR rebalancing, etc.


Hm, not sure it will work in my case, because the SOBR is used for a Cloud Repo.

 



OK. It was just a thought; you might be limited to adding another VHR and then just letting Veeam do its thing, even with the locality policy. Might be something to ask on the Forums or with Support.

 
 
 

Yep, I have opened a topic on the forum and am waiting for an answer.
 

