
I’ve seen this issue with larger SOBRs that have many ReFS extents. With a backup copy job (pruning) with GFS, backup chains would suddenly and unintentionally switch to other extents, losing all space efficiency for the GFS points. We used scripts to detect chains spanning more than a single extent. It turned out to be a bug solved in V10, paired with a very conservative process in VBR for detecting extents near space exhaustion. Some estimation is involved here, since VBR cannot predict the space savings from block cloning.
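For reference, below is a rough sketch of the kind of detection script we used, not the exact one. It works purely at the file-system level and assumes the extents are reachable as local folders on the repository server (the extent paths are made-up examples); it groups backup files by their per-job folder name and flags any chain whose files show up under more than one extent root.

```python
# Rough illustration: flag backup chains whose files are spread across more
# than one SOBR extent. Assumes extents are plain local folders and that each
# job keeps its backup files (.vbk/.vib/.vrb) in a folder named after the job.
from collections import defaultdict
from pathlib import Path

# Hypothetical extent mount points; replace with your real extent paths.
EXTENT_ROOTS = [
    Path(r"E:\VeeamExtent01"),
    Path(r"F:\VeeamExtent02"),
]

BACKUP_EXTENSIONS = {".vbk", ".vib", ".vrb"}

def chains_per_extent(extent_roots):
    """Map each job folder name to the set of extents its backup files live on."""
    placement = defaultdict(set)
    for root in extent_roots:
        for file in root.rglob("*"):
            if file.suffix.lower() in BACKUP_EXTENSIONS:
                job_folder = file.relative_to(root).parts[0]
                placement[job_folder].add(root.name)
    return placement

if __name__ == "__main__":
    for job, extents in sorted(chains_per_extent(EXTENT_ROOTS).items()):
        if len(extents) > 1:
            print(f"WARNING: chain '{job}' spans extents: {', '.join(sorted(extents))}")
```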

 

This could be tweaked with two registry keys:

Path: HKLM\Software\Veeam\Veeam Backup and Replication

Name: SOBRSyntheticFullCompressRate 

Value: 35, DWORD

Description: overrides the estimated space a VM would take on the SOBR for a full backup, expressed as a percentage of the previous full backup size (e.g., a value of 35 assumes the new full will occupy 35% of the previous full’s size).

 

Name: SobrForceExtentSpaceUpdate

Value: 1, DWORD

Description: enables advanced SOBR extent free-space update logic. With this set to 1 (enabled), the service updates the cached extent free space every time a task is assigned.
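If it helps to see both in one place, here is a minimal sketch of setting the two values with Python’s winreg module. It assumes it is run elevated on the VBR server with 64-bit Python, and it is assumed the Veeam Backup Service needs a restart afterwards for the keys to take effect; the key path and value names are exactly the ones listed above.

```python
# Minimal sketch: set the two SOBR-related registry values described above.
# Assumes an elevated (administrator) session on the VBR server; a restart of
# the Veeam Backup Service afterwards is assumed to be required.
import winreg

VEEAM_KEY = r"SOFTWARE\Veeam\Veeam Backup and Replication"

values = {
    "SOBRSyntheticFullCompressRate": 35,  # % of previous full used for the space estimate
    "SobrForceExtentSpaceUpdate": 1,      # 1 = refresh cached extent free space per task
}

# KEY_WOW64_64KEY ensures we write to the 64-bit registry view.
with winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE, VEEAM_KEY, 0,
    winreg.KEY_SET_VALUE | winreg.KEY_WOW64_64KEY,
) as key:
    for name, data in values.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)
        print(f"Set {name} = {data}")
```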

 

Maybe it helps.

Regards,

Mike

Thanks @Michael Melter for sharing! Have these reg keys existed since v9, or only since v10? What is the default behavior in v10? I guess it can still be controlled with these keys?


Hi @vNote42. They have actually existed since V9, or even earlier. To my knowledge they are still valid. Some time ago I took part in a forum thread where Anton talked about plans to change the placement strategy in SOBRs from the ground up in V11: https://forums.veeam.com/veeam-backup-replication-f2/sobr-placement-policy-disregarded-for-gfs-t68404.html

We’ll find out in a week or so… :wink:


Thanks for sharing, @Michael Melter - and that was a question I had as well, @vNote42, on the version. I also found this related KB:

https://www.veeam.com/kb2282

I’d always recommend testing configurations like this - but that is hard to do with multiple large ReFS extents.


Thanks @Rick Vanover for sharing the KB article. Just to get it straight: the “new” method mentioned there (already in 2017) is not the overhaul Anton kind of announced for V11. As I understood it, he was merely referencing the method used to generate GFS points in backup copy chains.

Testing those large repos would be great, I fully agree. We have already done so multiple times in production, and I know “it just works”. :grin:

Besides the minor issues mentioned here, those constructs run just great in many of the customer environments we designed during our projects. I’m a big fan of what I usually call “hyper-converged backup storage”: 1-n industry-standard servers, all acting as DAS ReFS repositories and maybe even as direct storage access proxies as well. Scalable LAN-free backups, limited only by the speed of the HBA.


Fair shout, @Michael Melter - and one thing to note is that in V11 there is an overhaul. Rin on my team has a blog planned on this topic for V11, and I’m going to link it from the “how to upgrade” guide as well as from the key changes in the Help Center docs.


One thing I also want to share for the readers: don’t treat registry keys as generic, miscellaneous hidden enhancements. Instead, for specific scenarios, they can address behavior for exacting configurations. If that makes sense.

