I would like to check with you guys whether this performance is solid. I first set everything up with WAN Accelerators, physical machines with SSDs, one on the source and one on the target side, but it caused some problems and it's not so convenient for me.
Now I have configured everything in direct mode, and I would like to see whether this performance is solid or not. The connections at both locations are 1 Gbps, with around 750-850 Mbps upload/download speed.
So this is from Primary Office to Secondary Office via VPN, 2 different countries.
I have 3 BCJs: one is for the 2 biggest VMs, and the others are for smaller VMs. So only the biggest VM is left now; all the others are done, but the processing rate is not great, to be honest. No other processes are running in the background.
Hello Nemanja.
Your performance seems to be in the right order of magnitude for your line stats.
What exactly do you mean by 1 Gbps but only around 750 Mbps? So your line only delivers about ¾ of the nominal speed? Can you make sure that no other tasks are using the line while the BCJ is running?
You're edging at 43 MB/s, which is around 50% of the expected maximum.
There is always quite some overhead in the stack, so I'd expect no more than ~70 MB/s; you never see the theoretical maximum here.
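For reference, here is the back-of-the-envelope arithmetic (a rough Python sketch; the ~70 MB/s ceiling is an assumption, and the exact percentage depends on which baseline you compare against):

```python
# Line-speed arithmetic for the numbers above.
nominal_mbps = 1000      # advertised 1 Gbps line
measured_mbps = 850      # best measured throughput
observed_mbs = 43        # processing rate reported by the BCJ
ceiling_mbs = 70         # assumed practical ceiling after stack/VPN overhead

theoretical_mbs = nominal_mbps / 8   # 125 MB/s on the wire
measured_mbs = measured_mbps / 8     # ~106 MB/s actually available

print(f"vs. measured line:     {observed_mbs / measured_mbs:.0%}")  # ~40%
print(f"vs. practical ceiling: {observed_mbs / ceiling_mbs:.0%}")   # ~61%
```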
Did you try WAN-Acc in high-bandwidth mode? From 1 Gbps upwards, the caching of low-bandwidth mode tends to be less efficient, so I would expect an improvement here. HB mode is even less resource intensive, as you don't need any SSD caching; it's all about CPU.
Hello @Michael Melter
Thank you for the response.
It's a nearly gigabit connection, up to 850 Mbps at both locations.
Those WAN links are in use nearly all the time, since that is also our production network. Users connect over the WAN to our servers to work on projects and to upload and sync projects to the servers (we are an engineering company).
I tried WAN accelerators in high-bandwidth mode, but as I mentioned, I got some errors with it, and Veeam support didn't provide much help, to be honest.
They even suggested that at these download/upload rates I don't need WAN accelerators at all.
WAN-Acc in HB mode does extra compression and makes use of digest files to rule out unchanged blocks. It can even be useful up to 10 Gbit/s if your hardware is powerful enough. Maybe support was referring to LB mode here. It's hard to estimate the impact for your situation, because it is highly dependent on your data, but I would give it a try. It could give you some 50% more speed if you're lucky.
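To illustrate the digest idea (a conceptual Python sketch only; this is not Veeam's actual implementation or digest format): the source hashes fixed-size blocks and skips any block whose digest the target side already knows.

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks, illustrative only

def digests(data: bytes) -> list[str]:
    """One hash per fixed-size block, like a digest file of known blocks."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_send(data: bytes, known: set[str]) -> list[int]:
    """Indices of blocks whose digest the target does not have yet."""
    return [i for i, d in enumerate(digests(data)) if d not in known]

old = b"A" * BLOCK_SIZE * 4          # first run: all 4 blocks cross the WAN
known = set(digests(old))

new = old[:BLOCK_SIZE] + b"B" * BLOCK_SIZE + old[2 * BLOCK_SIZE:]
print(blocks_to_send(new, known))    # -> [1], only the changed block is sent
```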
LB mode does not seem worth trying, as there is too much overhead to justify using it at 100 Mbit/s and above.
Did you set the compression in the BCJ to high or extreme already? That could also be worth trying. If you leave it on automatic, you will just transfer the source backup data “as is”; presumably it is compressed as “optimal” only.
You might also consider seeding the backup by transporting the VBK manually (e.g. on an external HDD) and mapping the BCJ to it afterwards. Then you would not need to transfer the full backup over the WAN, only the increments.
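A rough illustration of why seeding pays off (all sizes here are made-up assumptions; plug in your own):

```python
# Back-of-the-envelope: initial full over the WAN vs. seeding + increments.
full_gb = 4000       # assumed size of the full VBK
increment_gb = 40    # assumed size of one increment
rate_mbs = 43        # observed processing rate from this thread

def hours(gb: float, mbs: float) -> float:
    return gb * 1024 / mbs / 3600

print(f"full over WAN: {hours(full_gb, rate_mbs):.1f} h")      # ~26.5 h
print(f"one increment: {hours(increment_gb, rate_mbs):.1f} h")  # ~0.3 h
```

With a seed, only the increment-sized transfers ever hit the WAN.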
I already tried it and got errors. This was the error I was getting throughout the whole backup copy job process, and at the end the source WAN accelerator just gave up; it was not up anymore.
It causes some trouble, and it's not great to have it that way. I need it to be reliable, since this is our DR site.
I created a new BCJ using WAN Accelerators in high-bandwidth mode, with no pre-populated cache or seeding. This is the performance:
It’s no different than using direct mode...
1.3x vs. 1.2x is not a big difference in compression. That depends on the source data, as I said.
But according to the dialog, you saturate your network at 98%, so it’s maxed out already.
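Put as numbers (a quick sketch; ~106 MB/s is just your measured 850 Mbps converted to bytes):

```python
# Effective payload rate when the wire is the bottleneck:
# wire rate * utilization * compression ratio.
wire_mbs = 850 / 8        # ~106 MB/s measured line speed
utilization = 0.98        # network saturation shown in the job dialog
for ratio in (1.2, 1.3):  # direct mode vs. WAN-Acc HB compression
    print(f"ratio {ratio}: ~{wire_mbs * utilization * ratio:.0f} MB/s effective")
```

Once the link is saturated, a slightly better compression ratio only moves the effective rate a little.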
Seems about right to me. After the initial seed, I’m assuming any incremental copies will be smaller and take less time anyway, so to me it looks okay.