Question

Optimise backup performance for SMB shares


Userlevel 3

Hi Guys,

Are there any best practices for Veeam Agent for Windows backups to SMB shares that would give better performance?

We are using the shared folder option in Veeam Agent because we do not have a server in that location. There are around 4 to 5 users there, so a dedicated server does not make sense. The only device at the location is a Synology NAS. A backup takes around 45 minutes to complete. Is there any option to optimise the performance?
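As a rough sanity check (a sketch with assumed figures, not measurements from this environment): dividing the backed-up data size by the 45-minute job time gives the effective throughput, which can be compared against what a Gigabit link can theoretically deliver.

```python
# Rough throughput sanity check (the 50 GB job size is an assumption for illustration).
def effective_mbps(data_gb: float, minutes: float) -> float:
    """Average throughput in MB/s for a backup of data_gb finished in `minutes`."""
    return data_gb * 1024 / (minutes * 60)

GIGABIT_MB_S = 1000 / 8  # ~125 MB/s theoretical ceiling for 1 GbE

if __name__ == "__main__":
    rate = effective_mbps(data_gb=50, minutes=45)  # hypothetical 50 GB job
    print(f"effective: {rate:.1f} MB/s, 1 GbE ceiling: {GIGABIT_MB_S:.0f} MB/s")
```

If the effective rate comes out far below the link ceiling, the limit is likely the NAS disks, its CPU, or SMB overhead rather than raw line speed.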


18 comments

Userlevel 7
Badge +17

Hi @Govinda - there really are no performance configs within VAW that I'm aware of, besides maybe the throttling setting. See here in the Guide for how to configure it.

I know you say you think a server is overkill, but you could install Veeam Community Edition and perform a file share backup that way to see if you get better performance.

Userlevel 3


Without a physical server it's impossible to install Veeam Community Edition, at least in my understanding. Since there is no option on the VAW side, I need to check on the Synology side whether some settings can be tweaked to optimise performance. I did some searching and found some settings, but they really did not make any difference. Right now I have VAW and a Synology NAS, and backup is done over SMB. It's as simple as that.

Userlevel 7
Badge +8

Hi,

I can give you some personal advice here, just guessing and as ideas.

From the agent perspective:

Run the backup task when the PC is not heavily used.
Schedule the tasks so that all 5 do not run at the same time.
Use an SSD and Gigabit Ethernet.

From the Synology side:

SSD disks, either as destination or as cache.
Gigabit connectivity; bonded is even better.
Don't schedule other tasks (other shares, copies, etc.) in the backup window.

Depending on your Synology model, the memory and CPU can also be a bottleneck.
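The "don't run all 5 at the same time" point can be sketched as simple stagger arithmetic (a hypothetical illustration only; Veeam Agent schedules are configured per machine in its UI, not via a script like this):

```python
from datetime import datetime, timedelta

def staggered_starts(first_start: str, agents: int, gap_minutes: int) -> list:
    """Spread backup start times so the NAS never serves all agents at once."""
    base = datetime.strptime(first_start, "%H:%M")
    return [(base + timedelta(minutes=gap_minutes * i)).strftime("%H:%M")
            for i in range(agents)]

# 5 agents with 50-minute gaps (a bit over the observed 45-minute job time)
print(staggered_starts("20:00", 5, 50))
```

With a gap slightly longer than a typical job, each agent gets the NAS and the network to itself.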

cheers.
 

Userlevel 3


Yes, from the agent perspective I have done all of that.

From the Synology side, the 2nd and 3rd points I have done; the first and last ones I will try to implement and see how much difference they make. Thanks for your suggestions. I have tried a Cloud Connect service, but again, backup only works well when we have a good upload speed; otherwise it fails continuously. One more problem to deal with, and for that problem there is no real solution as such.

Userlevel 7
Badge +17


Ah ok...understood. I would suggest installing the Veeam CBT driver instead of using the native VAW one, as noted in the Guide, but it appears to only be useful for backing up large DBs.

Userlevel 3


Yes, that's in place. We are using the latest version of VAW with the CBT driver.

Userlevel 7
Badge +20


These are some good tips. I use a Synology NAS for my backups; it has 2 x 1Gb NICs bonded so I can get the best throughput during backups. My other Synology has a 10Gb NIC, but that one serves my VMware VMs with iSCSI datastores.

The bottleneck is going to be a combination of the network and the NAS itself, so running backups outside of busy times is the best thing if you can do it. As mentioned, if you have SSDs in the Synology it will be faster, but then CPU/RAM become the limiting factors.
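As a back-of-the-envelope comparison of those link options (theoretical ceilings only, and note that a bond mainly helps aggregate throughput across multiple clients, not a single SMB stream):

```python
# Theoretical link ceilings in MB/s (protocol overhead ignored).
def link_ceiling_mb_s(gbits: float) -> float:
    """Convert a link speed in Gbit/s to a MB/s ceiling."""
    return gbits * 1000 / 8

single_1g = link_ceiling_mb_s(1)     # one 1 GbE NIC
bonded_2x1g = link_ceiling_mb_s(2)   # 2 x 1 GbE bonded: aggregate only,
                                     # a single stream still tops out near 125 MB/s
ten_g = link_ceiling_mb_s(10)        # 10 GbE
print(single_1g, bonded_2x1g, ten_g)
```

So five agents hitting a bonded 2 x 1Gb NAS share roughly 250 MB/s between them, which is another argument for staggering the jobs.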

HTH

Userlevel 7
Badge +6

My only recommendation, if possible, would be to not use SMB and change over to iSCSI so that you can take advantage of the multipathing aspect of the protocol. My understanding is that even NFS has better performance than SMB. Regardless, there's not a lot to be tweaked with SMB.

Userlevel 3


I do agree with you on NFS vs SMB performance. But do you think it's a good idea to connect an iSCSI drive directly to an endpoint? Unlike a server, an endpoint (laptop/desktop) restarts almost every day; won't that lead to data corruption? Sorry, I have never dared to try this. I am using ReFS on an iSCSI drive attached to Windows Server 2016/2019 and the performance is superb. Unfortunately, for the smaller location I cannot have a server due to budget constraints and the small number of people working there.

Userlevel 7
Badge +8

I would suggest keeping it "as simple as possible":
The workstations are located on the same network as the Synology NAS.
The backup timing/schedule is set up to avoid overlapping.
I would try to upgrade the Synology for better performance (SSD, cache, network if doable, etc.).
Optimise the Veeam Agent backup (guessing) to daily incrementals plus a weekly full, and protect those SMB shares so they are not visible over the network and not mapped anywhere, just dedicated to backups.

Sometimes the simpler, the better; they are only 5 machines.

If the time comes that you need extra performance, the number of workstations grows, etc., investing in a Veeam B&R server will be worth it!

(Fully my personal opinion, based on what I read from your posts.)

cheers.
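The daily-incremental-plus-weekly-full suggestion is easy to size on the back of an envelope. A sketch using per-machine figures mentioned elsewhere in the thread (roughly 80 GB fulls and ~7 GB increments; treat them as assumptions here):

```python
def weekly_chain_gb(full_gb: float, incr_gb: float, incrementals_per_week: int = 6) -> float:
    """Space one week of backups needs: one full plus the daily increments."""
    return full_gb + incr_gb * incrementals_per_week

per_machine = weekly_chain_gb(full_gb=80, incr_gb=7)  # 80 + 6*7 = 122 GB
print(f"per machine: {per_machine} GB, 5 machines: {per_machine * 5} GB/week")
```

Multiply by the number of retained weeks to check the figure against the NAS volume size.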

Userlevel 7
Badge +20

As they say, use the "KISS" principle - keep it simple, stupid. If you can back up over iSCSI, that is going to be the best method. Otherwise, you will need to tweak the other systems that send directly to the NAS. It will take time but, in the end, you will find the right combination of settings to optimise it as best you can.

Userlevel 3


Yes, I agree with you.

Userlevel 7
Badge +6


I've found that for backups, the SSD cache doesn't really do much for performance. Perhaps in larger environments it could, but for my SMB (small business) client base, I think the money is better spent on 10Gb networking.

Userlevel 7
Badge +6


I haven't done direct-connect iSCSI; I run through a switch, or two in most cases for redundancy. However, I've mostly stopped using NAS for storage and instead use a small, purpose-built server. In most cases, that's going to be a Dell R540 for larger environments, or an R440 or R340 for smaller environments; I know 14th Gen is out the door, so the 15th or 16th Gen equivalents are where I'm at now. I also use towers, depending on the client's needs. The T640 was awesome because of the number of disks it could hold. Whatever it is, it needs to have a regular PERC so that I have a battery-backed cache, and then I have enterprise support agreements for replacing failed disks, etc. That adds a lot more reliability and redundancy versus a prosumer NAS with a couple of prosumer drives using software RAID with no cache and no redundancy aside from the drives.

With my smaller clients, I'm beginning to move to Cloud Connect backups and want to put a NAS onsite for a local copy but keep most of the data in Wasabi. That said, if I need a small server on-premises and a regular VBR deployment, which is still my preference, I'm looking at using something more along the lines of an OptiPlex or Precision workstation, which may or may not have disk redundancy, to run things, and still put a copy in Wasabi. I've been loosely referencing Datto and Barracuda and what they do with their turn-key appliances as far as specs and sizing in my design, but I haven't settled on anything just yet.

Userlevel 3


How is your experience with Wasabi? I used a Cloud Connect service provider to back up directly from VAW to a Cloud Connect repository and felt that the bandwidth was being throttled to Mbps-level speeds by the provider. If we go with file-level backup only, it will again be slow. If we back up the entire laptop, a full backup will be close to 80GB at least. Getting the initial backup done is always the challenging part. Increments are also around 6 to 7GB per day at least. If the user happens to be connected with a good upload speed, the chances of the backup completing successfully are very high; if not, the backup fails continuously.
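The upload-speed concern is easy to quantify. A sketch of how long those figures would take at a few uplink speeds (pure arithmetic, ignoring compression, dedupe, and retries):

```python
def upload_hours(data_gb: float, uplink_mbps: float) -> float:
    """Hours to push data_gb over an uplink of uplink_mbps (megabits per second)."""
    return data_gb * 8 * 1000 / uplink_mbps / 3600

for mbps in (10, 50, 100):
    full = upload_hours(80, mbps)  # the ~80 GB initial full
    incr = upload_hours(7, mbps)   # a ~7 GB daily increment
    print(f"{mbps:>3} Mbps: full {full:.1f} h, increment {incr:.1f} h")
```

At 10 Mbps the initial full alone takes the better part of a day of uninterrupted upload, which matches the experience of seeding being the hard part.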

Userlevel 7
Badge +6

Use multiple gateway servers, and select them all, to enhance performance.

Userlevel 7
Badge +20


This technically will not help, as a client connects to one gateway and then stays there unless there is a disconnect. It gives you multiple gateways to connect to, but not much of a performance enhancement.

Userlevel 7
Badge +8

I'd find the bottleneck, but at 1Gb your speeds are going to be a bit lacking compared to 10Gb.

I find that in most environments, if things are sized even close to correctly, two things end up being the bottleneck: network and disk.

Spinning disks under high IOPS can create some pretty high latency. This will be more apparent in merges, instant restores, and synthetic operations. With enough spinning disks, or a large cache, you don't notice it as much.
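The IOPS point can be made concrete with a rough model (all figures are assumptions): a merge is largely random I/O, so its duration is roughly the number of blocks to move, times a read and a write each, divided by the repository's random IOPS.

```python
def merge_minutes(merge_gb: float, block_mb: float, iops: float) -> float:
    """Rough merge duration: one random read plus one write per block, limited by IOPS."""
    blocks = merge_gb * 1024 / block_mb
    return blocks * 2 / iops / 60

# Hypothetical 7 GB increment being merged, 1 MB blocks:
print(f"spinning disk (~150 IOPS): {merge_minutes(7, 1, 150):.1f} min")
print(f"SSD (~20000 IOPS): {merge_minutes(7, 1, 20000):.2f} min")
```

The model is crude, but it shows why the same merge that is barely noticeable on SSD can drag on spinning disks.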

The network is often the bottleneck if you don't have a bunch of proxies, as Veeam will push the data as hard as it can. If you have fast storage on both sides, and enough proxies, you will fill the network pipe quickly.

I can flood our 10GbE connection between sites running Veeam copy jobs :) Throttles are important for things like that.
