What’s the fastest backup speed you’ve ever achieved with Veeam?
I was working with a system recently that wasn’t using any flash within the backup repositories, just beefy RAID controllers and multiple RAID 60s, and saw a pretty impressive 3 GB/s throughput at the job level. Sure, it’s not NVMe speeds, but it’s certainly no slouch!
And I can’t talk about this subject without bringing up my favourite ever UI bug in Veeam, when I ‘achieved’ a speed of ~429.49 PB/s.
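For what it’s worth, that glitched figure sits suspiciously close to a power of two, which hints at a 32-bit counter artifact. A purely speculative sketch of the arithmetic; the 100 MB unit here is my illustrative assumption, not a known Veeam internal value:

```python
# Speculative: 2**32 * 100 MB is almost exactly 429.49 PB, so a wrapped
# 32-bit counter (or a near-zero elapsed-time divisor) could plausibly
# produce a reading like this. The 100 MB block unit is an assumption.

UINT32_WRAP = 2**32      # one full wrap of an unsigned 32-bit counter
block_mb = 100           # hypothetical per-block unit, MB

petabytes = UINT32_WRAP * block_mb / 1e9  # MB -> PB (decimal units)
print(f"{petabytes:.4f} PB")              # -> 429.4967 PB
```

That lines up neatly with the ~429.49 PB/s the UI reported, though without seeing the actual bug report this is only a guess at the cause.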
On your marks… get set… post!
This would be a nice speed….
Under normal circumstances I get something around 1.5 GB/s, and less in some environments.
What a speed
We normally also see around 1-2 GB/s in our environments, some faster and some slower.
Using Nimble and BfSS, I’ve reached 3.9 GB/s read speed. I can generally reach over 1.2 GB/s every other day or so, and, not consistently, on about one run a day (this particular job runs every 30 min during the workday). Not bad, I thought!
Cheers!
We should probably do an updated “Beat the Gostev” in 2020 or so, since we kicked off this fun project a while back. I was also thinking of a “fastest restore” type of contest. Remember this @JSeeger ?
That would actually be cool @Rick Vanover ...a RESTORE speed contest thing. I mean...when folks are in need of getting back online, that’s the real need, right? Not necessarily how fast one can back up.
@JSeeger - show off! Well done!
Well don’t tag me then in the future
That’s great, @JSeeger, speeds at that level in both directions. I remember the record-setting equipment was a temporary allocation.
We typically see 2-4 GB/s on our backups, and we are hoping to improve that by testing the Hitachi plugin for storage snapshot backups. Hopefully we can saturate the 10GbE and FC networks at that point.
1-2 GB/s usually. Our replication between sites has to be throttled, as it will overload the 10GbE pipe :)
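As a sanity check on numbers like these, it helps to keep the factor of eight between link speeds (quoted in Gb/s) and job throughput (reported in GB/s) in mind. A rough sketch, where the 0.9 efficiency factor is an illustrative guess at protocol overhead rather than a measured value:

```python
# Rough line-rate conversion: network links are quoted in gigabits per
# second, while backup job stats report gigabytes per second.

def link_limit_gbytes(gbits: float, efficiency: float = 0.9) -> float:
    """Approximate usable GB/s for a link, with an assumed overhead
    factor (the 0.9 efficiency is an illustrative guess)."""
    return gbits / 8 * efficiency

for label, gbits in (("10 GbE", 10), ("25 GbE", 25), ("4 x 25 GbE", 100)):
    print(f"{label}: ~{link_limit_gbytes(gbits):.2f} GB/s usable")
```

So a sustained 1-2 GB/s job really is brushing up against what a single 10GbE link can carry, which is consistent with needing to throttle replication.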
I’m running mostly 32Gb fibre now, so once I get my new proxy servers and tape servers I’ll be able to really push limits.
My bottleneck is my Veeam SANs: all slow spinning disk. Waiting on some quotes for SSD/NVMe so I can play with the big kids :)
I am clearly doing something wrong compared to you guys. But I have a new server on order with local disks and will have 25Gb direct storage access from our PowerStore array so I should be able to move away from my proxy server. I expect my bottleneck to move to the destination. I’m not sure that this is my fastest speed, but it’s the fastest speed I’ve achieved in the past 24 hours. I’ll just be happy to have my backups off of the Synology NAS that is being targeted.
I did have some really fast backups running when I was testing different block sizes with the PowerStore as both the source and destination. It also wasn’t ideal, as the destination was a virtual disk within the proxy/repository, but it was temporary for testing.
Mostly between 100MB/s and 500MB/s for most of our customers (SMB). For the more performant implementations we see 1 to 2 GB/s, and for the best-performing ones around 4 GB/s.
If I may revive this blog again,
Can anyone perhaps comment on the sustained speed (with at least 5TB of data) of a single 2-CPU server?
I have a solution that I am proposing with a VMware virtualized Veeam server and a big standalone proxy server (2 x 32-core, 384GB RAM, 4 x 25GbE) connected to a high-speed HPE Alletra 5050 with 4 x 25GbE ports. I need to give an estimated throughput per hour for 1000 VMs (250TB total) to be backed up over multiple 25GbE links.
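For a rough first pass on that estimate, back-of-the-envelope math like the following can frame the backup window. The sustained rates here are assumptions to be validated with a pilot job on the actual hardware, not promises:

```python
# Back-of-the-envelope backup-window estimate for a ~250 TB full backup.
# sustained_gbs is an assumed effective rate after compression, dedupe
# and per-VM overhead; measure a pilot job before committing to a number.

def backup_window_hours(total_tb: float, sustained_gbs: float) -> float:
    """Hours to move total_tb (decimal TB) at sustained_gbs GB/s."""
    total_gb = total_tb * 1000          # decimal TB -> GB
    return total_gb / sustained_gbs / 3600

for rate in (2.0, 4.0, 8.0):            # assumed sustained GB/s
    print(f"{rate:.1f} GB/s -> {backup_window_hours(250, rate):.1f} h")
```

Even at an optimistic 8 GB/s sustained, 250TB is the better part of a nine-hour window for a full, which is why incremental chain design matters as much as raw throughput here.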
One thing to note: a single big server (2 x 32-core, 384GB RAM, 4 x 25GbE) may hit limitations where multiple smaller proxies would work better, plus you get redundancy.
Also, you have 4 proxy network ports and 4 SAN ports on the Alletra, but you need the proxy to talk to production storage as well. Did you account for that?
Physical Veeam Proxy is a good call, and Virtual Veeam server is totally fine.
You may want to get 2 servers with half the memory each, even 2 x 20-core, to save cost. The price will be pretty close, and you gain redundancy.
Virtual Veeam server is totally fine.
I’d revise that to say a virtual Veeam server is “acceptable”. From a security perspective, there are some growing risks with having the Veeam server virtualized, especially on the same hardware as production. I have a lot of virtualized VBR servers, but I prefer physical when I can.
Console glitched out a bit. 11 GB/s, but also 2GB/s.
Seeing that this is going to 6 LTO8 tapes, and coming from some spinning rust, I’m happy with 2 GB/s, but 11 just isn’t possible lol.
6 LTO8 drives’ max speed is about 2160MB/s, which is 2.16GB/s. Add encryption and the fact that it’s a theoretical max, and that’s not bad.
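The tape math above checks out against the published spec; a quick sketch of the arithmetic (360 MB/s is the LTO-8 native transfer rate per drive, and real jobs land below the theoretical aggregate):

```python
# Theoretical aggregate throughput for a pool of LTO-8 drives.
# 360 MB/s is the published native (uncompressed) rate per drive.

NATIVE_MBS = 360          # LTO-8 native transfer rate, MB/s per drive
drives = 6

aggregate_mbs = NATIVE_MBS * drives
print(f"{aggregate_mbs} MB/s = {aggregate_mbs / 1000:.2f} GB/s")
# -> 2160 MB/s = 2.16 GB/s, matching the ~2 GB/s line on the graph
```

Which also confirms the 11 GB/s reading has to be a console glitch: it’s roughly five times what six drives could physically accept.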
Pretty obvious there are only a few large (40TB+) servers keeping this job running on 1 or 2 drives.
Splitting VBKs across tapes would be so handy for things like that.
That is pretty good for LTO8 for sure. Anything above 1 GB/s always makes me happy.
Daaaannnggg! Nice speed!
lol. I was saying that 11 GB/s is kind of impossible. 6 LTO tapes would be about 2 GB/s, which is what the graph shows.
They don’t call him “Speedy J” for nothing ...just saying