SOBR Designs


Userlevel 7
Badge +22

Hi Folks,

 

I have had a love/hate relationship with SOBRs. I am hoping I will be more on the love side going forward with the V11 Cloud Connect ability to evacuate individual tenants from a SOBR. I am finding that it is best to keep the number of extents down to a low number, maybe even just 3. Also, if using VMs, use one VM per extent; otherwise your CPU and memory resources get divided across the extents. Keeping in mind that each concurrent task requires 1 core and 4 GB of memory, you don't want this divided. And if leveraging fast clone, on either ReFS or XFS, you need even more memory. What has been your experience with SOBRs, especially for Cloud Connect?
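To make that rule of thumb concrete, here is a minimal sizing sketch in Python. The 1 core / 4 GB per concurrent task figure comes from the post above; the host sizes and OS reserve are made-up illustrative inputs.

```python
# Rule of thumb from the post: 1 core and 4 GB RAM per concurrent
# repository task. Host sizes and OS reserve below are illustrative.

CORES_PER_TASK = 1
GB_RAM_PER_TASK = 4

def max_concurrent_tasks(cores: int, ram_gb: int, os_reserve_gb: int = 8) -> int:
    """Concurrent tasks one repository VM can sustain."""
    by_cpu = cores // CORES_PER_TASK
    by_ram = (ram_gb - os_reserve_gb) // GB_RAM_PER_TASK
    return min(by_cpu, by_ram)

# One 16-core / 64 GB VM backing a single extent:
print(max_concurrent_tasks(16, 64))            # 14 (RAM-bound)

# The same VM backing 4 extents: each extent gets roughly a quarter
# of the budget (crude: the OS reserve is split along with it).
print(max_concurrent_tasks(16 // 4, 64 // 4))  # 2 per extent
```

That is the "divided resources" problem in a nutshell: the same hardware split across four extents sustains far fewer tasks per extent.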


17 comments

Userlevel 7
Badge +20

Morning!

 

So I architected my company’s Cloud Connect offering, and it’s been SOBR with ReFS all the way.

I am also of the stance that fewer extents is better if you can size those extents appropriately, more so because ReFS is going to give you the FastClone improvements, and we want our customers to make the most of these.
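To put a rough number on the FastClone benefit, here is a sketch of synthetic-full space consumption with and without block cloning; the 10 TB source size and 5% daily change rate are made-up inputs, not figures from this thread.

```python
# Illustrative only: repository space after one week of incrementals
# plus one synthetic full, with vs. without FastClone (block cloning).

source_tb = 10.0          # assumed source data size
daily_change = 0.05       # assumed daily change rate
incrementals = 6          # one week of daily incrementals

incrementals_tb = source_tb * daily_change * incrementals

# Without block cloning, the new synthetic full is physically rewritten:
without_fastclone = source_tb + incrementals_tb + source_tb

# With FastClone, the new full references existing blocks on disk,
# so it costs almost nothing beyond clone metadata:
with_fastclone = source_tb + incrementals_tb

print(f"without FastClone: ~{without_fastclone:.0f} TB")  # ~23 TB
print(f"with FastClone:    ~{with_fastclone:.0f} TB")     # ~13 TB
```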

We actually back our extents with scale-out namespaces too so that we have redundancy on our scalability.

 

It’d be good to have a chat about this at some point to see how you guys do things vs us, but it could be complicated, as I believe we both have a UK presence...

Userlevel 7
Badge +22

Morning!

 

So I architected my company’s Cloud Connect offering, and it’s been SOBR with ReFS all the way.

I am also of the stance that fewer extents is better if you can size those extents appropriately, more so because ReFS is going to give you the FastClone improvements, and we want our customers to make the most of these.

We actually back our extents with scale-out namespaces too so that we have redundancy on our scalability.

 

It’d be good to have a chat about this at some point to see how you guys do things vs us, but it could be complicated, as I believe we both have a UK presence...

As long as we keep state secrets concealed we should be OK 🙂. The good thing is that when dealing with just the generic level of knowledge, there is not much to give away. Now, if you guys have some great home-grown method of rebalancing SOBRs, then 🙂 we might have to send our Canadian spies over there to woo you with cheap beer, etc. 😉

Userlevel 7
Badge +20

Morning!

 

So I architected my company’s Cloud Connect offering, and it’s been SOBR with ReFS all the way.

I am also of the stance that fewer extents is better if you can size those extents appropriately, more so because ReFS is going to give you the FastClone improvements, and we want our customers to make the most of these.

We actually back our extents with scale-out namespaces too so that we have redundancy on our scalability.

 

It’d be good to have a chat about this at some point to see how you guys do things vs us, but it could be complicated, as I believe we both have a UK presence...

As long as we keep state secrets concealed we should be OK 🙂. The good thing is that when dealing with just the generic level of knowledge, there is not much to give away. Now, if you guys have some great home-grown method of rebalancing SOBRs, then 🙂 we might have to send our Canadian spies over there to woo you with cheap beer, etc. 😉

Had me at cheap beer...

Userlevel 7
Badge +20

Mixing ReFS and XFS also seemed to be an issue, even though the Veeam documentation says it is OK. I suggest only mixing them while migrating from ReFS to XFS or vice versa. Another thing with XFS: I am finding it does not require quite as much memory as ReFS does.

Userlevel 6
Badge +1

I’m also designing a new SOBR with XFS and reflink. Do any of you use high-density servers with a bunch of local disks, like the HPE Apollo? How is your setup with those servers and SOBRs?

Userlevel 7
Badge +8

I’m also designing a new SOBR with XFS and reflink. Do any of you use high-density servers with a bunch of local disks, like the HPE Apollo? How is your setup with those servers and SOBRs?


Hello, I’m doing the same. I hope it will be live this summer.

My config:

HPE Apollo 4510 Gen10
2x Intel Xeon Gold 6252 CPU @ 2.1 GHz (24 cores each)
16x 16 GB DIMMs (256 GB RAM total)
4x 25 GbE SFP28
1 GbE copper Ethernet interface for admin + BMC
58x 16 TB SAS 12G HDD
2x 128 GB M.2 cards in RAID 1 (OS)
2x HPE Smart Array P408i-p Gen10

 

2x RAID-60 with 128 KB strip size, each on 2x (12+2), plus 2 hot spares (768 TB usable)
RHEL 7/8, XFS with reflink enabled, immutability (chattr flags)

58 HDD total (16 TB each)
56 HDD usable (minus 2 hot spares)
28 HDD (half) on each RAID controller

RAID-60 with dual parity per span:
28 HDD - 4 HDD for parity = 24 HDD x 16 TB = 384 TB usable

SOBR with 2 extents:
2 x 384 TB = 768 TB total capacity

Survivable disk losses: 6 per RAID-60 array (2 per RAID-6 span, plus the hot spares)
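For anyone checking the arithmetic, a quick sketch reproducing the capacity math above:

```python
# Sanity-check of the capacity math above (Apollo 4510, 16 TB drives).

drive_tb = 16
controllers = 2
spans_per_array = 2              # RAID-60 = striped RAID-6 spans
drives_per_span = 14             # 12 data + 2 parity
hot_spares = 2                   # shared spares, per the config above

array_drives = controllers * spans_per_array * drives_per_span    # 56
total_drives = array_drives + hot_spares                          # 58

data_drives_per_array = spans_per_array * (drives_per_span - 2)   # 24
extent_tb = data_drives_per_array * drive_tb                      # 384 TB
sobr_tb = controllers * extent_tb                                 # 768 TB

print(total_drives, extent_tb, sobr_tb)  # 58 384 768
```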

 

Provisioning with RHEL Satellite and Ansible. I want to push backup storage to the next level like we did for other things: battery farming for servers (not sure if that translates into English, hehe).

Userlevel 6
Badge +1

Looks like my config! I guess we both follow the same thread on the Veeam forums :)

 

I changed the CPU to the 6230R with 26 cores; the 6252 is a bit outdated and the 6230R is not that expensive. Or the 5220R with 24 cores. Regarding RAM, 12x 16 GB = 192 GB would be the optimal config (2 CPUs x 6 DIMMs each gives the best performance). I’ve got the 40/50GbE 547SFP adapter in my config; we get a network hardware upgrade this summer.

Is this just one server or will you have multiple ones in one SOBR?

 

I’ll also go with XFS. Regarding RAID groups and filesystems, I’m still having some headaches over how RAID60 will perform. I know the Apollo benchmark for full backups, and that will be no issue. But we have some extra load, like copy and offload jobs, and maybe SureBackup later. I thought about 2x RAID10, which would result in less storage but faster rebuild times and more IOPS; a rough capacity comparison follows below. I’m also not sure how well 52 cores will work for such a large repository server.
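For scale, a minimal sketch of the raw capacity trade-off, assuming the same 28 data/parity drives per controller as the config above (rebuild times and IOPS are left out; those depend heavily on controller and drive model):

```python
# Capacity trade-off per controller: 28 x 16 TB drives (hot spares
# excluded), RAID-60 as 2 x (12+2) vs. mirrored RAID-10. Illustrative.

drives, drive_tb = 28, 16

# RAID-60 as 2 x (12+2): two parity drives lost per span.
raid60_tb = (drives - 2 * 2) * drive_tb      # 384 TB

# RAID-10: everything mirrored, so half the raw capacity.
raid10_tb = drives // 2 * drive_tb           # 224 TB

print(f"RAID-60: {raid60_tb} TB  RAID-10: {raid10_tb} TB  "
      f"difference: {raid60_tb - raid10_tb} TB per controller")
```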

Userlevel 7
Badge +8

Good catch on the CPU; I will check that. For RAM it was about the same price for us, so more is never useless.

Multiple servers in the SOBR.

I had an appliance with 48 cores and that was OK. I think it will be fine with the Apollo too :)
I prefer more storage; it would be bad luck to lose 6 disks at the same time. In the worst-case scenario you can put the extent into maintenance mode.

Userlevel 6
Badge +1

I’d check RAM again; I got feedback from HPE that more than 2x 6 DIMMs will have a performance impact. Not sure how much. How many servers will you have? I’m just curious, and it’s good to have someone here to share the experience.

Userlevel 7
Badge +8

I’d check RAM again; I got feedback from HPE that more than 2x 6 DIMMs will have a performance impact. Not sure how much. How many servers will you have? I’m just curious, and it’s good to have someone here to share the experience.


I’m really interested in your feedback on the RAM.

4 servers, maybe more.

Userlevel 6
Badge +1

I’d check RAM again; I got feedback from HPE that more than 2x 6 DIMMs will have a performance impact. Not sure how much. How many servers will you have? I’m just curious, and it’s good to have someone here to share the experience.


I’m really interested in your feedback on the RAM.

4 servers, maybe more.

It’ll still be some weeks until we get our demo system. Regarding RAM performance, check this page: https://www.thomas-krenn.com/en/wiki/Optimize_memory_performance_of_Intel_Xeon_Scalable_systems#Dual_CPU_systems_with_16_DIMM_slots
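The short version of that page: these Xeon Scalable CPUs have six memory channels each, so 2 x 6 = 12 identical DIMMs keeps every channel balanced, while 16 DIMMs leaves some channels with two DIMMs and others with one, which hurts interleaving. A tiny sketch of the population:

```python
# Why 12 DIMMs can beat 16 on a dual-socket Xeon Scalable server:
# 6 memory channels per CPU, and unbalanced channels hurt interleaving.

CPUS, CHANNELS_PER_CPU = 2, 6

def dimms_per_channel(total_dimms: int) -> list[int]:
    """Round-robin DIMMs across all memory channels."""
    channels = CPUS * CHANNELS_PER_CPU
    base, extra = divmod(total_dimms, channels)
    return [base + (1 if i < extra else 0) for i in range(channels)]

print(dimms_per_channel(12))  # twelve channels with 1 DIMM each: balanced
print(dimms_per_channel(16))  # four channels with 2, eight with 1: unbalanced
```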

Userlevel 7
Badge +20

Don’t want to derail this, but is anyone using the Dell PowerEdge XE7100 family? Looking at them for a SOBR; especially interested in any Dell vs HPE comments.

Userlevel 6
Badge +1

Don’t want to derail this, but is anyone using the Dell PowerEdge XE7100 family? Looking at them for a SOBR; especially interested in any Dell vs HPE comments.

 

I tried to get a quote for it, but none of our partners wanted to offer it. No idea why. We looked into the Apollo 4510, Supermicro 6049 and Cisco S3260. For us the Apollo was by far the cheapest model; I was surprised that it was even cheaper than the Supermicro. They are not all the same, some have more PCIe slots, but we can live with 1x dual-port FC and 1x dual-port LAN.

Userlevel 7
Badge +20

Don’t want to derail this, but is anyone using the Dell PowerEdge XE7100 family? Looking at them for a SOBR; especially interested in any Dell vs HPE comments.

 

I tried to get a quote for it, but none of our partners wanted to offer it. No idea why. We looked into the Apollo 4510, Supermicro 6049 and Cisco S3260. For us the Apollo was by far the cheapest model; I was surprised that it was even cheaper than the Supermicro. They are not all the same, some have more PCIe slots, but we can live with 1x dual-port FC and 1x dual-port LAN.

Thanks for the insights! We’re primarily Dell, with HPE for some options (I personally prefer Nimble to Dell in the storage space, for example), so it’s good to get these perspectives.

Userlevel 7
Badge +8

I’d check RAM again; I got feedback from HPE that more than 2x 6 DIMMs will have a performance impact. Not sure how much. How many servers will you have? I’m just curious, and it’s good to have someone here to share the experience.


I’m really interested in your feedback on the RAM.

4 servers, maybe more.

It’ll still be some weeks until we get our demo system. Regarding RAM performance, check this page: https://www.thomas-krenn.com/en/wiki/Optimize_memory_performance_of_Intel_Xeon_Scalable_systems#Dual_CPU_systems_with_16_DIMM_slots


Thanks for the link! I will check that.

@MicoolPaul We’re more into the HPE Apollo, but your Dell suggestion is interesting, thank you!

Userlevel 6
Badge +1

Seems we’re all in the same boat here, planning a SOBR with Apollos. It would be nice to get some real-life feedback. According to HPE a lot of people are using Apollos, but our Veeam contact does not know anyone. I’ll start a new thread about Apollo / high-density servers; maybe that triggers someone to reply.

Userlevel 7
Badge +8

Seems we’re all in the same boat here, planning a SOBR with Apollos. It would be nice to get some real-life feedback. According to HPE a lot of people are using Apollos, but our Veeam contact does not know anyone. I’ll start a new thread about Apollo / high-density servers; maybe that triggers someone to reply.


There are some Apollo users with Veeam in France; I’ve already met a few, but on ReFS, not on XFS for the moment. I think you’ll be first; it takes so long to buy anything on the French public market :no_mouth:
