
Technical Report 4948: NetApp E-Series storage with Veeam Backup & Replication


The new Technical Report 4948, “NetApp E-Series storage with Veeam Backup & Replication: Reference architecture and best practices”, is online!

 

 

But let's start from the beginning: how exactly did this happen?
While we often use NetApp ONTAP systems for primary storage environments, backup storage has a different set of requirements.

What really matters for backups:
- Reliability
- Block storage, to enable immutability through the Veeam hardened repository (see the sketch after this list)
- Price/performance per TB
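A quick aside on that second point: the hardened repository protects backup files by setting the Linux immutable attribute on them, the same bit that `chattr +i` sets. The following is only a minimal sketch of that mechanism, not Veeam's actual code; the backup file path is hypothetical, and it needs root on a filesystem that supports inode flags, such as XFS.

```python
# Minimal sketch of the immutability mechanism behind the hardened repository:
# setting the Linux immutable inode flag, the same bit `chattr +i` sets.
# Needs root; the repository path below is hypothetical.
import fcntl
import os
import struct

FS_IOC_GETFLAGS = 0x80086601  # _IOR('f', 1, long) on 64-bit Linux
FS_IOC_SETFLAGS = 0x40086602  # _IOW('f', 2, long) on 64-bit Linux
FS_IMMUTABLE_FL = 0x00000010  # the immutable attribute bit

def set_immutable(path: str, immutable: bool) -> None:
    fd = os.open(path, os.O_RDONLY)
    try:
        # Read the current inode flags, flip the immutable bit, write them back.
        buf = fcntl.ioctl(fd, FS_IOC_GETFLAGS, struct.pack("l", 0))
        flags = struct.unpack("l", buf)[0]
        if immutable:
            flags |= FS_IMMUTABLE_FL
        else:
            flags &= ~FS_IMMUTABLE_FL
        fcntl.ioctl(fd, FS_IOC_SETFLAGS, struct.pack("l", flags))
    finally:
        os.close(fd)

set_immutable("/mnt/repo/backup.vbk", True)  # hypothetical backup file
# Writes, renames and deletes now fail with EPERM until the bit is cleared.
```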


For smaller environments, we like to use rack servers with RAID controllers and internal HDDs.
However, once several hundred terabytes are required, rack servers with internal disks are not always optimal in terms of restore times, RAID expansion, and so on.


A very common combination for us is therefore a 1U rack server with a SAS or FC card, with the E-Series direct-attached to it. This combination provides a reliable and high-performance repository.
I always call the E-Series "a better external RAID controller". And this is exactly where my problem started.


So the typical questions came from my colleagues:
- What speed can I expect from 24 HDDs in the system? (a back-of-envelope sketch follows this list)
- Should I use FC or SAS?
- Is ReFS actually as fast as XFS?
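For the first of these questions, at least a rough ceiling can be estimated on paper. The numbers below are purely illustrative assumptions for a sanity check, not the measured results that ended up in the TR:

```python
# Back-of-envelope sketch only: illustrative numbers, not measured TR results.
DISKS = 24                # NL-SAS HDDs in the shelf
MB_S_PER_DISK = 180       # assumed sequential MB/s for a 7.2k rpm HDD
PARITY_OVERHEAD = 2 / 10  # e.g. an 8+2 RAID 6 style layout: 2 of 10 drives hold parity

raw_mb_s = DISKS * MB_S_PER_DISK
usable_mb_s = raw_mb_s * (1 - PARITY_OVERHEAD)
print(f"raw ~{raw_mb_s} MB/s, usable ~{usable_mb_s:.0f} MB/s")
# -> raw ~4320 MB/s, usable ~3456 MB/s, before controller, block size and
#    SAS/FC link limits come into play - which is exactly why we benchmarked.
```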


Of course, I had a personal preference and a gut feeling for all of these questions, but I couldn't back that up with facts.
So I took a look at TR-4948 and realised that it contained only very basic performance values, which didn't help me any further.


So I decided to write to the authors of the actual TR, Mitch Blackburn and Alonso DeVega (NetApp). I told them: "If you provide me with hardware for testing, I'll share the benchmark results with you."
It didn't take long for the first call to take place. We quickly agreed on how we wanted to do this, and soon the E-Series was on its way for testing.

 

A few weeks later, the test hardware arrived and I built my temporary lab...


A note on the picture: most of the hardware (source storage, servers, switches, etc.) was from our own lab; only the E-Series systems were provided by NetApp.

 

I quickly realised that I needed to organise myself a little, so I created a test plan for the different scenarios and documented the results against it (a sketch of such a matrix follows below).
What I completely underestimated? TIME! And so the whole thing escalated a bit and I spent a full week in our lab ;)
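To give an idea of what such a test plan looks like, here is a hypothetical sketch of how the matrix can be generated. The dimensions and labels are illustrative, not the exact combinations from the TR:

```python
# Hypothetical benchmark matrix; the dimensions and values are illustrative.
from itertools import product

protocols   = ["SAS", "FC"]
filesystems = ["XFS (hardened repository)", "ReFS"]
workloads   = ["active full backup", "synthetic full", "restore"]

test_plan = [
    {"id": i + 1, "protocol": p, "filesystem": f, "workload": w}
    for i, (p, f, w) in enumerate(product(protocols, filesystems, workloads))
]

for case in test_plan:
    print(f"{case['id']:02d}: {case['protocol']:3} | {case['filesystem']:26} | {case['workload']}")
```

That is already twelve combinations, each needing full backup and restore runs; the hours add up quickly.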


Here is a picture to give you a good overview of the test setup.

 

 

The main changes in the TR are:
- Updated the information to Veeam Backup & Replication v12
- Added the chapter "Hardened Linux Repository"
- Added the benchmark results


Detailed information on the results can be found in the TR:

https://www.netapp.com/media/79436-tr-4948.pdf


I am very pleased to have been given the opportunity to contribute to the document.
A big thank you goes to Pete Ybarra (Veeam), Mitch Blackburn (NetApp) and Alonso DeVega (NetApp) for their support!

Wow Matze, awesome effort! Congrats on publishing this; it's a very useful resource. As we talked through it over the last few weeks, I could see how much effort you put into it!


Phenomenal work as always @MatzeB 👏👏👏
 

I recall you telling me about this, amazing to see it come to a close!

Would love to catch up and hear more about this feat!


Congratulations Matze! Amazing work, thanks for your time and effort 💪🏼👍


Chapeau, @MatzeB!

Well done and great publication. Precise and to the point.


Dang good efforts there @MatzeB. Well done & thank you for sharing.


Congrats on the document and the effort you put in @MatzeB 

Great job and nice read. 👍


Great work @MatzeB! It is nice to see a Veeam #community #vanguard contribute to this NetApp Technical Report. It shows the value of this program to our joint customers. @Rick Vanover @Madi.Cristil @safiya this is a good highlight💪😎


Indeed! Solid content. Great work @MatzeB  and others. 


Great work @MatzeB! Thanks for working with the NetApp E-Series team on this; we really appreciate your efforts. Just FYI, I will be talking about Matze's testing at NetApp Insight 2024, 9/23-9/25. If you’re there, come see my session, 1021 - Tackling Demanding Workloads on E-Series! I look forward to future collaboration with the Advanced UniByte team.


Great work, @MatzeB ! Thanks for sharing!


Great blog, Matze! Can’t wait to see the next NetApp/Advanced UniByte collaboration.


This was a great writeup!

 

I really enjoy squeezing every ounce of performance I can out of my disks.

I have a disk array with about 150 disks. On sequential reads I can max out dual 16Gb Fibre Channel connections without the disks breaking a sweat, but when reading and streaming to six LTO tapes from 30TB+ backups, the random I/O creates a lot of latency. Combine that with jobs running, merges, replications, SureBackup, etc., and it’s always interesting to know what's happening behind the scenes, and why.

 

As someone who likes to nerd out over I/O, block size, latency, queue depth and throughput, I wouldn't mind seeing those numbers the next time you do this. With restores to production, throughput is king, but it’s all relevant when talking performance numbers.

I’m about to implement a few AFAs for backups, which will make it interesting to do a bit of testing on Linux vs. Microsoft performance. It takes the array out of the equation somewhat when latency is sub-millisecond. I assume Linux repos would show more benefits on an AFA, but if I’m bottlenecked by the network or 32Gb Fibre Channel it won’t even matter.

 

Either way, fantastic blog post. Well written, and I enjoyed reading it.

 

