
Fast storage and configuration.


Scott
  • Veeam Legend
  • 993 comments

I connected a new SAN and 2 servers that will be used for Veeam. These are just some "hero numbers" for reads; I intend to do a write-up of the design with proper benchmarks in the future.

 

When you have an environment with a large quantity of data, you still need to be able to restore it quickly or run instant restores while you migrate to production.  

 

For a preliminary test I connected 2 servers to a single SAN and created a few volumes to run IOMeter.

 

 

23,900 MB/s = 23.9 GB/s = 191.2 Gb/s. With sub-millisecond latency at 240k IOPS, I am very happy.
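As a rough sanity check of those units (my own sketch, not from the post): IOMeter reports throughput in MB/s (bytes), while network and FC links are quoted in Gb/s (bits), so the conversion is a factor of 8.

```python
# Convert storage throughput (MB/s, bytes) to link speed (Gb/s, bits).
# Decimal units throughout (1 GB = 1000 MB), matching vendor conventions.
def mbps_to_gbps_bits(mb_per_s: float) -> float:
    return mb_per_s * 8 / 1000

print(mbps_to_gbps_bits(23_900))  # -> 191.2 (Gb/s), matching the numbers above
```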

 

There is always a bottleneck to account for in design. Usually it’s budget 🤣. Currently my limitation is the number of servers/ports, specifically 32Gb Fibre Channel ports, as they are all running at 100%. If I had another server, or more ports available on the repositories, I have no doubt this thing would be hitting close to 36 GB/s.

 

Storage snapshots from production, along with fast networking from proxies to repositories, are also critical when designing something like this. Sustaining this level will require 2 proxies with 4x 25Gb ports each. Another thing to account for is the production storage: if your production storage is on 32Gb Fibre Channel, you need 8 FC ports just to zone in your proxies. In many cases this is impossible unless you have multiple SANs hosting your production environment.
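To make the port math above concrete, here is a small sizing sketch (my own illustration; the efficiency factor is an assumption, since links rarely sustain 100% of line rate):

```python
import math

# Ports needed to carry a target throughput (GB/s, bytes) over links
# rated in Gb/s (bits). `efficiency` models real-world link utilization.
def ports_needed(target_gb_per_s: float, port_gbit: float,
                 efficiency: float = 1.0) -> int:
    per_port_gb_per_s = port_gbit * efficiency / 8  # Gb/s (bits) -> GB/s (bytes)
    return math.ceil(target_gb_per_s / per_port_gb_per_s)

print(ports_needed(23.9, 25))        # -> 8 x 25GbE, i.e. 2 proxies with 4 ports each
print(ports_needed(23.9, 32))        # -> 6 x 32Gb FC at a theoretical 100% line rate
print(ports_needed(23.9, 32, 0.75))  # -> 8 x 32Gb FC at a more realistic 75%
```

At realistic utilization the FC count lands on the 8 ports mentioned above.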

 

Taking the time to plan out how much data, how fast, with proper testing along the way, can produce great results. It can save you from overspending, or from not achieving the backup speeds you were looking for. Accounting for what you already have in regards to networking, production storage, and other factors is also extremely important. This setup should be able to move about 2 PB in a 24-hour period, but could do more with additional ports; with enough ports, 3-4 PB would be a reasonable target. That's another reason not to allocate your entire SAN to the repo server if you bought it with room to grow: you can always add an additional repository server or more ports in the future. Once the storage is configured, keep adding proxies as you evergreen your servers and watch Veeam beat the pants off your production environment. 😀
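The 2 PB per 24 hours figure checks out from the measured rate. A quick back-of-the-envelope (my own sketch, decimal units assumed):

```python
# Petabytes moved in 24 hours at a sustained throughput (GB/s, decimal units).
def pb_per_day(gb_per_s: float) -> float:
    seconds_per_day = 86_400
    return gb_per_s * seconds_per_day / 1_000_000  # GB -> PB

print(round(pb_per_day(23.9), 2))  # -> 2.06, the ~2 PB/day quoted above
print(round(pb_per_day(36.0), 2))  # -> 3.11, within the 3-4 PB target with more ports
```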

 

I’ll make a more significant blog post when time permits.  

 

 

 

10 comments

coolsport00
  • Veeam Legend
  • 4109 comments
  • October 23, 2024

I’d be happy with those numbers too! 😏 I’ll be looking for the writeup further down the road Scott….


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8401 comments
  • October 23, 2024

Some amazing numbers there Scott. 👍


HunterLAFR
  • Veeam Legend
  • 421 comments
  • October 24, 2024

BUDGET! Like DNS, it's always the main issue here!

Lovely numbers buddy!

See you next week!


Scott
  • Author
  • Veeam Legend
  • 993 comments
  • October 24, 2024
coolsport00 wrote:

I’d be happy with those numbers too! 😏 I’ll be looking for the writeup further down the road Scott….

I’m only going to provision about half of the storage for now and either expand it down the road, or add additional repos if we need the performance.  That’s why the math part of the networking, fiber, and capabilities of the storage is so important. In hindsight, I would have ordered larger servers to add additional FC and network connections to the repository servers. I can still add additional repos for the performance and lessen my fault domain a bit at least. 


Scott
  • Author
  • Veeam Legend
  • 993 comments
  • October 24, 2024
Chris.Childerhose wrote:

Some amazing numbers there Scott. 👍

Thanks Chris. In theory each of these units can do 50 GB/s, so 100 GB/s = 800 Gb/s combined. Together they could technically move over 8 PB in 24 hours. I don’t have the production storage, networking, or servers to make that happen yet though. 🤣 I should be good for a few years at least on the Veeam side. Going from spinning rust to flash is quite the jump!


Scott
  • Author
  • Veeam Legend
  • 993 comments
  • October 24, 2024
HunterLAFR wrote:

BUDGET! Like DNS, it's always the main issue here!

Lovely numbers buddy!

See you next week!

Can’t wait! It’s been too long my friend. 


Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • 8401 comments
  • October 24, 2024
Scott wrote:
Chris.Childerhose wrote:

Some amazing numbers there Scott. 👍

Thanks Chris. In theory each of these units can do 50 GB/s, so 100 GB/s = 800 Gb/s combined. Together they could technically move over 8 PB in 24 hours. I don’t have the production storage, networking, or servers to make that happen yet though. 🤣 I should be good for a few years at least on the Veeam side. Going from spinning rust to flash is quite the jump!

Absolutely that is a big jump and nice to see.  Eventually we all get there with technology.  😎


JSeeger
  • Veeam Vanguard
  • 11 comments
  • November 8, 2024

I like me some good performance numbers :-)


waqasali
  • Influencer
  • 196 comments
  • November 8, 2024

Hi @Scott This is such an important discussion to have. Thanks for starting it. 😊


dloseke
  • On the path to Greatness
  • 1447 comments
  • November 26, 2024

This is amazing. And yes...budget…. I hadn’t considered that to be the bottleneck, but that's 100% accurate.

