
Veeam 100 Show Episode 15 - Veeam Benchmarking Done Right

Madi.Cristil

Have you watched the latest episode of the Veeam 100 Show? If not, you can rewatch it here: 

In a #backup environment, repository performance is a critical factor that shouldn't be overlooked. High throughput is great for any backup administrator, but there's more to consider, especially when disaster strikes and restore speed becomes essential to meet SLAs.

In this technical deep dive, we’ll explore repository benchmarking best practices, answering key questions like:
🔹 Is ReFS or XFS faster?
🔹 iSCSI, FC, or SAS—what performs best?
🔹 How does the number of HDDs or LUNs impact performance?

This session will cover how to measure performance, the impact of disk parallelism, common pitfalls to avoid, and key lessons learned to optimize your backup storage in real-world scenarios.
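As a rough illustration of that kind of measurement (not from the episode itself - the mount point, file sizes, and job counts below are placeholders), a fio run like this approximates a sequential backup write workload with several parallel streams:

```bash
# Illustrative repository baseline: sequential 512 KiB writes, several
# parallel streams. /mnt/backup-repo, sizes, and job counts are placeholders.
fio --name=repo-seq-write \
    --directory=/mnt/backup-repo \
    --rw=write --bs=512k --ioengine=libaio --direct=1 \
    --size=10G --numjobs=4 --iodepth=16 \
    --group_reporting
```

Raising --numjobs is a quick way to see the disk-parallelism effect discussed in the session: on spindle-backed repositories, aggregate throughput often keeps climbing with more concurrent streams until the disks saturate.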

Finally, we’ll dive into how LVM can enhance a Linux-hardened repository, offering both flexibility and a final performance boost! 
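For readers who want to experiment, a minimal sketch of a striped LVM volume backing an XFS repository might look like the following - the device names and stripe parameters are hypothetical examples, not the values tested in the episode:

```bash
# Hypothetical devices; striping spreads writes across all PVs in parallel.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_repo /dev/sdb /dev/sdc /dev/sdd /dev/sde

# 4 stripes with a 256 KiB stripe size - illustrative values, tune for your array.
lvcreate --type striped -i 4 -I 256k -l 100%FREE -n lv_repo vg_repo

# XFS with reflink enabled, which Veeam needs for Fast Clone on Linux repositories.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/vg_repo/lv_repo
mkdir -p /mnt/backup-repo
mount /dev/vg_repo/lv_repo /mnt/backup-repo
```

A nice side effect of LVM here is the flexibility: you can later extend the volume group with more disks and grow the filesystem with xfs_growfs without rebuilding the repository.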

Great to have @MatzeB and @Andrew.Zhelezko on the show!

8 comments

Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8603 comments
  • March 19, 2025

This was a great episode and had some good tips for doing benchmarking. Highly recommend watching. 👍


matheusgiovanini

That was a really great episode!


MatzeB
  • Veeam Vanguard
  • 75 comments
  • March 20, 2025

Thank you all - it was nice to be part of the session. In the next few months I will keep working on this topic, and then maybe we'll get a Round Two of this session ;) 🎉


coolsport00
  • Veeam Legend
  • 4190 comments
  • April 1, 2025

As I was gone on vacay...I'm really glad this was recorded so I could follow back around and watch it. Really good session here @MatzeB. Although, as a long-time iSCSI user, I disagree with the recommendation to not use it. I think the choice between FC and iSCSI is more of a personal preference. The only suggestion should be to have a dedicated network for your storage...and that makes sense (& you do mention that). I haven't used FC in over a decade, but when I did I found it more complex than iSCSI. Just my 2¢.

I found it interesting that increasing disks didn't increase speed/perf until you got to 60. Also, the storage I use is Nimble, and with this array type there is no RAID/LUN creation...which I like. All you need to do is chop out a block of storage you want (with strip size) and be done. So I guess this would be analogous to the multiple-LUNs topic you cover. Good point to remind folks to use multiple Proxies. As one who does not do this (not really needed, and for my env more complexity than I want), I sometimes forget more Proxies is better vs fewer and bigger. I use LNX Proxies, so I'm curious how your tests come out for those. Looking fwd to that follow-up session 😊

I really found this informative, Matthias. Thanks!


Scott
  • Veeam Legend
  • 1012 comments
  • April 1, 2025

While there are benefits to both, I'd say the biggest one for iSCSI is cost if you are sharing switches and back-end equipment. Another one is bandwidth if you are looking at 200Gbps-800Gbps ports. FC will likely hit 256Gbps next year; my 32Gbps fiber network seems slow 😆

The biggest benefit I find for FC is the stability. Once it's running, it's set and forget. Plus, with a separate backend, someone's switch firmware update or config isn't going to mess up my storage. This also ties into security. In its most basic form, FC is very simple: just create a zone on a switch and add the devices. You can even direct-connect a server into a SAN, skipping a switch/zone altogether. I've had nothing but amazing performance using FC.
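To give a feel for how short that zoning step is, here is a rough Brocade FOS-style sketch - the alias names and WWPNs are invented for illustration:

```bash
# Invented aliases/WWPNs; run on the FC switch CLI.
alicreate "veeam_repo_hba", "50:06:0b:00:00:c2:62:00"
alicreate "san_array_p1", "50:06:0b:00:00:c2:62:01"
zonecreate "z_veeam_to_san", "veeam_repo_hba; san_array_p1"
cfgadd "prod_cfg", "z_veeam_to_san"   # use cfgcreate if the config doesn't exist yet
cfgenable "prod_cfg"
```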

I agree that people who don't use FC as often would be more familiar with iSCSI, as they're already used to working with Ethernet and IP-based devices. If we are talking extremely large FC or iSCSI networks, both can get as complex as you want. 



MatzeB
  • Veeam Vanguard
  • 75 comments
  • April 2, 2025

Thanks for the feedback. Yes, iSCSI is not that bad in general. But there are different angles: it needs network bandwidth - okay, you can use dedicated ports. On security - yes, you can use direct-attached links or a dedicated VLAN - but from what I see, customers often don't.

My feeling simply was that SAS and FC are the most robust solutions there. If you need crazy speeds like @Scott, okay, fair - then maybe 32Gb FC is too slow - but for 99% of customers it isn't :)



coolsport00
  • Veeam Legend
  • 4190 comments
  • April 2, 2025

My take, in general, is FC brings more (needless) complexity to the storage equation...when iSCSI can give similar performance with less of it. That's all. But that's just my opinion. 😉

Regardless...really enjoyed your preso, Matthias. Looking fwd to the follow-up! 👍🏻


Scott
  • Veeam Legend
  • 1012 comments
  • April 2, 2025

I agree with both of you. FC is easier for me, and iSCSI would be easier for someone who doesn't have much experience with FC. Mainly because my environment, while large, isn't overly complex. FC has the ability to get EXTREMELY complex, but so can networking. 

I was just implying that once you work with FC a bit, and keep the complexity down, it’s actually very simple and rock solid. 

I have had significantly fewer performance issues with fiber as well, but that could be specific to me. I have a few things that run iSCSI, but not as much in our prod environment. It's handy when I need to add a 50TB volume to my workstation from a SAN, though, lol. That's another story 😂

32Gb FC has been pretty good for us as I've upgraded many devices from 8 and 16. With 2x 25Gb network ports and 2x 32Gb fiber ports, I rarely have issues. You could even go 4x4 for 100Gbps in servers. 

64Gb/128Gb might be an option during a future switch evergreen refresh, just to future-proof, but while 32Gb is current, many people are still at 8/16.

What will drive this change worldwide is end-user devices (workstations) utilizing 2.5, 5, or 10Gb network cards more frequently, plus the growing number of devices on the network. We have some issues where users want to download multi-TB files and it takes a long time. I can't just give everyone 10Gb cards and say have fun, or my servers will be overloaded. The storage itself is rarely an issue these days.