Question

Rebalance large SOBR in V12


Userlevel 7
Badge +8

Hello guys!

I’m considering rebalancing a large SOBR with v12. Any feedback from the field?

How long does it take? What kind of performance can we expect? Are there limitations on the tasks handled by the repo?

I’m trying to estimate the duration so I can plan ahead and communicate the maintenance window in production :)

Have a great monday!


11 comments

Userlevel 7
Badge +20


The timing will be dependent on many things like size of data, network speeds, storage types, source/target repo type (ReFS/XFS), etc.
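To turn those variables into a rough number, a back-of-envelope sketch could look like the following (the 50 TB and 300 MB/s figures are hypothetical examples, not anything Veeam publishes; real rebalance throughput depends on all the factors above):

```python
# Back-of-envelope rebalance duration estimate (illustrative only).

def estimate_hours(data_to_move_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to copy `data_to_move_tb` at a sustained `throughput_mb_s`."""
    total_mb = data_to_move_tb * 1024 * 1024  # TB -> MB (binary units)
    return total_mb / throughput_mb_s / 3600

# Example: 50 TB to shuffle between extents at a sustained 300 MB/s
hours = estimate_hours(50, 300)
print(f"~{hours:.0f} hours ({hours / 24:.1f} days)")
# → ~49 hours (2.0 days)
```

Even optimistic numbers land in the "days, not hours" range for large SOBRs, which is exactly why the maintenance window matters.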

Keep in mind it also puts the SOBR in maintenance mode, so you cannot use it while the rebalance runs; plan well.

Best of luck and let us know how it goes. We have not done this yet since we just finished all our updates to v12 last week.

Userlevel 7
Badge +8

Also tempted to use this in some medium sized environments.

I wonder if one could stop the process once kicked off.

So, if it turns out that it’ll take 8+ weeks, one can revert to normal ops. 😉

Userlevel 7
Badge +6

Following for the followup. Any SOBRs I have are just for copy jobs, with one performance tier and one capacity tier, except in my lab where there isn’t enough data to speak of. But I’d be curious to see what the results are.

Userlevel 7
Badge +8

I don’t know why @Michael Melter is marked Best answer, but anyway, here is a link to the documentation.

Rebalancing Extents of Scale-Out Backup Repositories - User Guide for VMware vSphere (veeam.com)

Userlevel 7
Badge +6

@Madi.Cristil @safiya can you reset the best answer on this please so we can correct?

Userlevel 7
Badge +22


I think it depends on what your Monthly plan is. If you have premium you get the Best answer :) 

Joking aside, having the SOBR in maintenance mode is rough if you are dealing with Cloud Connect; better to disable the tenant and then use the data mover? I think that can be done, from my brief incursion into the What’s New. I am doing an “ad hoc” read of the latest docs but have not gotten to that part yet.

Userlevel 7
Badge +20

More than likely it was marked as Best answer accidentally, being the first reply, before it got the like.

Userlevel 7
Badge +8

Veeam Community Recap #118 | Veeam Community Resource Hub

Answer from Hannes on the last recap. “If you have a large SOBR, don’t do it”

Userlevel 6
Badge +7

Agreed - Personally, I would assess the situation first, trying to find the best strategy (e.g. moving multiple smaller backup chains can be better than moving a single giant one).

I would use the “VeeaMover” feature in v12 to move backup chains across extents without putting the whole SOBR in maintenance mode. (EDIT: my initial suggestion is not applicable, as VeeaMover cannot currently move backups across extents in a SOBR. Thanks @Mildur for the correction)

You could use “the old way”: manually move entire backup chains across extents, then perform a Rescan of the whole SOBR. There are a few important caveats though:

  • No Fast Clone (XFS/ReFS) space savings - moved backups will be “inflated” to their nominal size!
  • Make sure no jobs related to the backups you are moving will run until you complete the move AND the rescan (disable jobs/tenants).
  • It is a 100% manual process that VB&R is “not aware of”. Caution is advised.
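The “multiple smaller chains” strategy above can be sketched as a toy planner: greedily shift the smallest chains that still narrow the gap from the fullest extent to the emptiest one, so each step is a short, interruptible move. The extent names and chain sizes are made up, and this is only an illustration of the idea, not Veeam’s actual rebalance logic:

```python
# Toy planner: which backup chains to move so used space evens out across
# extents. Purely illustrative; not Veeam's rebalance algorithm.

def plan_moves(extents: dict[str, list[float]]) -> list[tuple[str, float, str]]:
    """Return (source_extent, chain_size_tb, target_extent) moves, always
    shifting the smallest chain that still narrows the gap between the
    fullest and the emptiest extent."""
    moves = []
    while True:
        used = {name: sum(chains) for name, chains in extents.items()}
        src = max(used, key=used.get)
        dst = min(used, key=used.get)
        gap = used[src] - used[dst]
        # Only chains smaller than the gap actually narrow it when moved.
        candidates = [c for c in extents[src] if c < gap]
        if not candidates:
            return moves
        chain = min(candidates)  # smallest first: short, interruptible steps
        extents[src].remove(chain)
        extents[dst].append(chain)
        moves.append((src, chain, dst))

extents = {"ext1": [8.0, 2.0, 1.5, 0.5], "ext2": [1.0]}
for src, size, dst in plan_moves(extents):
    print(f"move {size} TB chain: {src} -> {dst}")
```

Note how the 8 TB chain never has to move: three small moves get the extents close enough, which is the operational advantage of many small chains over one giant one.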
Userlevel 7
Badge +22


Hence my memory of doing active fulls instead of all the gymnastics, unless the chain was relatively small. What I can’t remember now is whether performing a synthetic full would then let you leverage Fast Clone on the backups going forward.

Either way, the rebalance feature is not the saving grace I had once hoped for. Careful SOBR planning is key. My opinion, especially for Cloud Connect: a minimal number of extents, perhaps one real extent and one emergency “suddenly ran out of space” extent. The latter I saw happen with GFS and monthly archives (mind you, there was no capacity tier, which would have avoided the issue): the first months’ GFS were very small, and then one month, boom! For whatever reason a major difference in blocks appeared between that month and the original source VBK. At least that is what I think I remember seeing.

Userlevel 6
Badge +7

To my knowledge, going forward Fast Clone can be leveraged without problems - it’s just not “carried over” across extents or repositories when you move data without VeeaMover.

I agree that re-balancing a SOBR is not something to be taken lightly. I also agree it’s generally better to have a limited number of fairly big extents, instead of a high number of smaller ones.

To be fair, space balancing has never been a feature or a goal of the SOBR, especially in the “Data Locality” mode. I agree that it “looks nicer” to have an even distribution of free space across extents, but I’d prefer fast and efficient backups 😉
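The “inflated to nominal size” caveat from earlier in the thread is easy to underestimate, so here is the rough arithmetic (the 10 TB full, 5% change rate, and four retained fulls are made-up illustrative numbers):

```python
# Rough arithmetic for the "backups get inflated" caveat. With Fast Clone
# (ReFS/XFS block cloning), a synthetic full mostly references blocks that
# already exist on the extent; copied off with a plain file move, every
# full lands at its nominal size. Illustrative numbers only.

full_tb = 10.0       # nominal size of one full backup
change_rate = 0.05   # share of blocks that differ between consecutive fulls
retained_fulls = 4

# On the source extent: first full, plus only changed blocks per synthetic full
with_fast_clone = full_tb + (retained_fulls - 1) * full_tb * change_rate
# After a manual move to another extent: each full at its nominal size
inflated = retained_fulls * full_tb

print(f"with Fast Clone: {with_fast_clone:.1f} TB, after manual move: {inflated:.1f} TB")
# → with Fast Clone: 11.5 TB, after manual move: 40.0 TB
```

So a chain that fits comfortably on the source extent can need several times the space on the target, which is worth checking before any manual move.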
