Hi,
You’ll be able to leverage the extra networking bandwidth only if you need it.
Your repositories will benefit only if you've got enough proxies to keep them fed with data, which also means enough servers being protected, with enough data, to justify this. Also consider the I/O performance of your repository: will your storage be able to saturate 100Gbps (12.5GBps)? Can your production datastores keep up with such a read rate during backup, and such a write rate during restore?
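A quick back-of-envelope check makes the point: the backup stream can only move as fast as its slowest component. This is just a sketch; the NIC speed, repository write rate, and datastore read rate below are illustrative assumptions, not measurements.

```python
# Back-of-envelope check: can the repository and source storage keep up
# with a given NIC speed? All throughput figures are assumed examples.

def gbps_to_gBps(gbps: float) -> float:
    """Convert network gigabits/s to gigabytes/s (8 bits per byte)."""
    return gbps / 8.0

nic_gbps = 100.0                          # assumed NIC speed
link_gBps = gbps_to_gBps(nic_gbps)        # 12.5 GB/s needed to saturate it

repo_write_gBps = 6.0                     # assumed repo write throughput
source_read_gBps = 4.0                    # assumed datastore read throughput

# The effective backup rate is capped by the slowest link in the chain.
effective_gBps = min(link_gBps, repo_write_gBps, source_read_gBps)
print(f"Link can take {link_gBps:.1f} GB/s, effective ~{effective_gBps:.1f} GB/s")
```

With these example numbers the 100GbE link sits mostly idle, because the production datastore's 4 GB/s read rate is the bottleneck.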
If you've got some form of flash in your hardened repo, then it makes a lot more sense to do this.
For the record, we held a competition years ago to see who could back up a small dataset the fastest, and speeds of over 100GBps were achieved, so you absolutely can reach this level of performance with Veeam.
Hi @imadam A single repo is a SPOF (Single Point Of Failure) in itself, so you need to guarantee an additional copy (or copies) away from the primary repo.
You also need to eliminate any SPOF within the repository itself: redundant components such as power supplies and RAID controllers, RAID level 6 (recommended), and hot-spare disks.
Have a look at my blog posts regarding DAS repositories; I started publishing those articles two weeks ago, and they will give you an orientation.
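To make the RAID 6 recommendation concrete: two disks' worth of capacity goes to parity, plus whatever you set aside as hot spares. A minimal sketch of the usable-capacity arithmetic, with illustrative disk counts and sizes:

```python
# Sketch of RAID 6 usable capacity. Disk count and size are examples only.

def raid6_usable_tb(disks: int, disk_tb: float, hot_spares: int = 0) -> float:
    """Usable capacity: RAID 6 reserves 2 disks for parity, spares are idle."""
    data_disks = disks - hot_spares - 2
    if data_disks < 1:
        raise ValueError("not enough disks for RAID 6")
    return data_disks * disk_tb

# e.g. 12 x 16 TB disks with one hot spare -> 9 data disks -> 144 TB usable
print(raid6_usable_tb(12, 16.0, hot_spares=1))
```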
For the network, I think you need to understand your storage I/O first; you can benchmark your storage using these two tools:
http://www.iometer.org/
https://fio.readthedocs.io/en/latest/fio_doc.html#i-o-size
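For the fio route, a minimal job file for sequential throughput might look like the sketch below. The target directory and all tuning values (block size, queue depth, job count, run time) are assumptions you should adapt to your repository:

```ini
; repo-seq.fio - illustrative sequential-write benchmark for a repo volume
[global]
ioengine=libaio
direct=1
time_based
runtime=60
bs=1M
iodepth=32
numjobs=4
group_reporting

[seq-write]
rw=write
directory=/mnt/repo    ; assumed mount point of the repository volume
size=10G
```

Run it with `fio repo-seq.fio` and compare the reported bandwidth against the network speed you are sizing for. Large sequential blocks approximate backup traffic better than small random I/O.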
Hi,
from my point of view, taking a repository from 2×25GbE to 4×25GbE is interesting ...
I have a repo with 2×25GbE and 64 CPUs, so only 64 concurrent tasks, and the bottleneck is on the datastore :)
Even though VBR and the proxies sit directly on vSAN with 10GbE, the network is not the limit.
I see about 5GBps on jobs, which is around 40Gbps, so still enough for 2×25GbE, as jobs finish in about 2 hours on average.
So it really depends on what performance the platform will offer you and how much data you have to back up within the backup window.
It could turn out that the backups impact the platform enough that customers start asking about performance …
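The figures above can be sanity-checked with simple arithmetic: given a data volume and a backup window, the sustained throughput you need follows directly. A sketch, using an assumed 36 TB dataset that happens to reproduce the ~5GBps/~40Gbps figure from the post:

```python
# Sanity-check a backup window: sustained throughput needed to move
# a given amount of data in a given time. The 36 TB figure is an assumption.

def required_gbps(data_tb: float, window_hours: float) -> float:
    """Sustained network rate (Gbps) to move data_tb within window_hours."""
    gigabytes = data_tb * 1000.0              # decimal TB -> GB
    gBps = gigabytes / (window_hours * 3600.0)
    return gBps * 8.0                         # bytes/s -> bits/s

# 36 TB in a 2-hour window needs 5 GB/s = 40 Gbps sustained,
# which fits within 2 x 25GbE but not a single 25GbE link.
print(f"{required_gbps(36.0, 2.0):.1f} Gbps")  # -> 40.0 Gbps
```

If the dataset grows or the window shrinks, the same formula tells you when the next NIC upgrade actually pays off.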