Solved

Veeam is up and running in my environment - any advice on how to speed up my backups?


Userlevel 5
Badge

Hey Veeam-ers,

 

I finally got Veeam installed, configured, and running, and I am starting to kick off jobs, but my speeds are slow (100 Mbps) when writing.

 

I have a full 10 Gb backbone and a dedicated Nimble array specifically for my Veeam data, and I have a physical server with 4x 10 Gb NICs on it (2 for management and 2 for data).

 

Any ideas as to why it could be slow? I am doing some troubleshooting, but was curious if anyone had any thoughts before I go down a rabbit hole.

 

thanks guys


Best answer by coolsport00 22 August 2023, 20:08


39 comments

Userlevel 7
Badge +8

It’s trying to use DirectSAN, failing, then failing over to VMware Backup Proxy.

Now, depending on how “VMware Backup Proxy” is configured, that could be your issue.

For me, “VMware Backup Proxy” in this case would be a VM. If it were on the same host as the VM being backed up, it would use HotAdd and be quite fast. It essentially mounts the volume and then pushes it out the 10 Gb port on the VMware host.

This is why multiple VM proxies (1 per host is recommended) are needed to go this route. If the proxy isn’t on the same host, it uses NBD.

In that case it will also be using the VMware management ports, with the overhead that comes with them.
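As a rough sketch of that fallback order (a simplification for illustration only, not Veeam’s actual selection logic):

```python
def pick_transport_mode(direct_san_available: bool, proxy_on_same_host: bool) -> str:
    """Simplified sketch of the fallback order described above (not Veeam's actual logic)."""
    if direct_san_available:
        return "Direct SAN"   # proxy reads VM disks straight from the SAN LUNs
    if proxy_on_same_host:
        return "HotAdd"       # virtual proxy mounts the disks and pushes them out the host's 10 Gb port
    return "NBD"              # last resort: data flows over the host management interface

print(pick_transport_mode(direct_san_available=False, proxy_on_same_host=False))  # -> NBD
```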

 

If you have 1 Gb management ports, it’s slow… that speed is within the 1 Gb range. 1 Gbps = roughly 125 MB/s (theoretical max).

 

10 Gbps = roughly 1,200 MB/s in practice, so you will see around 1 GB/s in Veeam.
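For a quick sanity check of those numbers, here’s a minimal conversion sketch (theoretical line rate only, ignoring protocol overhead):

```python
def gbit_to_mbyte_per_sec(gbps: float) -> float:
    """Theoretical line rate only: 1 Gbit/s = 1000 Mbit/s = 125 MB/s."""
    return gbps * 1000 / 8

for speed in (1, 10):
    print(f"{speed} GbE ~= {gbit_to_mbyte_per_sec(speed):.0f} MB/s theoretical max")
# 1 GbE ~= 125 MB/s; 10 GbE ~= 1250 MB/s, so ~1 GB/s is about the ceiling
# you'd expect to see in the Veeam job stats on a 10 Gb path.
```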

 

Check your transport modes, and make sure the proxy you are using and your VMware management ports are on 10 Gb networking.

 

What I do is thin provision everything on the SAN itself to save space, but create my VMs thick provisioned. As a double bonus, they don’t expand and blow up the datastore either.

 

Userlevel 5
Badge

I am talking with someone now about it all

 

You guys have been a terrific help and are always here to help me out with things

 

thanks for all of the advice, things are moving much better now

 

thanks again guys

Userlevel 7
Badge +20

I would not think an auto-logoff GPO would factor into this. As noted, you are using thin provisioning, and based on the previous post that is part of the issue. Maybe this requires a support case to investigate further.

Userlevel 5
Badge

I don’t know what happened exactly, but I went from 1 Gbps read and 950 Mbps write speeds to 80 Mbps write or even lower.

any idea why that might have happened?

 

My job also says “DirectSAN is not available, failing over” but I have failover turned off.

 

 

 

Hi @clownyboots178 

Do the proxies and the VBR server have access to the datastores?

Have you set proxy-side direct storage access mode?

Backup Proxies -> VMware Backup Proxy -> Transport Mode

and check this setting in the job.

ATTENTION.

Only VM disks that are "Thick Provision Lazy Zeroed" or "Thick Provision Eager Zeroed" can be used in Direct SAN mode.

check this:

Direct SAN Access - User Guide for VMware vSphere (veeam.com)

 

Both of those options have been checked for some time, and we found out that only thick drives will work with Direct SAN. We are all thin, so we are not actually using it, I suppose, but since we have the LUNs connected to this machine, our speeds were amazing and they indicate “(san)” within the job.

Something has to be happening on the network or with the server, given that this happens every day on every job. We have a GPO auto-logging everyone off; could that cause an issue?

 

 

Userlevel 7
Badge +9

I don’t know what happened exactly, but I went from 1 Gbps read and 950 Mbps write speeds to 80 Mbps write or even lower.

any idea why that might have happened?

 

My job also says “DirectSAN is not available, failing over” but I have failover turned off.

 

 

 

Hi @clownyboots178 

Do the proxies and the VBR server have access to the datastores?

Have you set proxy-side direct storage access mode?

Backup Proxies -> VMware Backup Proxy -> Transport Mode

and check this setting in the job.

ATTENTION.

Only VM disks that are "Thick Provision Lazy Zeroed" or "Thick Provision Eager Zeroed" can be used in Direct SAN mode.

check this:

Direct SAN Access - User Guide for VMware vSphere (veeam.com)

 

Userlevel 7
Badge +20

Hi, I notice your screenshot says “Using backup proxy VMware Backup Proxy”, this is the default proxy deployed when you install Veeam. Is this intentional that you’re using this server as a proxy? Or do you have Veeam as a virtual machine for example, and then a physical server as your proxy & repository?

This is intentional; we have a dedicated Dell box that has way more horsepower than it needs, and we did that so it could be the proxy and handle anything we throw at it.

 

No network changes, and this happened every night this week in the same time frame with all the other jobs.

 

what should I look for in the logs?

 

So this Dell box is the VBR server + proxy, correct?

Userlevel 7
Badge +20

How is the storage you are backing up from connected to the environment? iSCSI or FC? You might want to look there since it failed over to Network mode. Something about the connection to the storage is preventing Direct SAN access.

Userlevel 5
Badge

Hi, I notice your screenshot says “Using backup proxy VMware Backup Proxy”, this is the default proxy deployed when you install Veeam. Is this intentional that you’re using this server as a proxy? Or do you have Veeam as a virtual machine for example, and then a physical server as your proxy & repository?

This is intentional; we have a dedicated Dell box that has way more horsepower than it needs, and we did that so it could be the proxy and handle anything we throw at it.

 

No network changes, and this happened every night this week in the same time frame with all the other jobs.

 

what should I look for in the logs?

 

Userlevel 7
Badge +20

Hi, I notice your screenshot says “Using backup proxy VMware Backup Proxy”, this is the default proxy deployed when you install Veeam. Is this intentional that you’re using this server as a proxy? Or do you have Veeam as a virtual machine for example, and then a physical server as your proxy & repository?

Userlevel 7
Badge +20

Are there any recent changes to your network, or any failures on switches or something? You can also check the logs to get more details: C:\ProgramData\Veeam\Backup
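If you’d rather skim those logs than open each file, a rough Python sketch like this works. The directory is the default one mentioned above; the keywords are just guesses at strings worth grepping for, not an official Veeam log format:

```python
import pathlib

LOG_DIR = pathlib.Path(r"C:\ProgramData\Veeam\Backup")  # default VBR log location
KEYWORDS = ("failover", "san", "nbd", "hotadd", "error")  # assumed search terms

for log_file in LOG_DIR.rglob("*.log"):
    try:
        text = log_file.read_text(errors="ignore")
    except OSError:
        continue  # skip files locked by running jobs
    for line_no, line in enumerate(text.splitlines(), start=1):
        if any(kw in line.lower() for kw in KEYWORDS):
            print(f"{log_file.name}:{line_no}: {line.strip()}")
```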

Userlevel 5
Badge

I don’t know what happened exactly, but I went from 1 Gbps read and 950 Mbps write speeds to 80 Mbps write or even lower.

any idea why that might have happened?

 

My job also says “DirectSAN is not available, failing over” but I have failover turned off.

 

 

Userlevel 7
Badge +8

I hit 3 GB/s at peak, I’ll hit 2 GB/s often, and 1 GB/s is common; however, depending on how much data is running per job and the concurrency, I see as low as 500 MB/s.

 

As stated, WAN accelerators only help for shipping data offsite, and if you have a 10 Gbps connection, YMMV. I don’t use one and send a bunch of data to other locations without issue.

Userlevel 7
Badge +20

Direct SAN is enabled and rocking and rolling 

we are reading and writing at 1 Gbps or more and it’s glorious

 

Dumb question: anyone ever seen faster speeds?

if so, how?

 

thanks for all of the help and advice guys 

 

 

There was a ‘beat the Gostev’ challenge that Veeam ran, which saw a couple of submissions of over 100 GB/s. The winner was 147.7 GB/s, so Veeam does scale VERY well.

 

Make sure you’ve got Multipath IO enabled so that you can write to your Nimble via multiple channels. Are your data interfaces shared for access to Nimble and your production storage?

 

A diagram of your topology would be a good place to start for identifying potential bottlenecks. There’ll always be a bottleneck, but it’s got to be fast enough to meet your needs. I just did a deployment that was seeing 4-5 GB/s throughput per site; it was with a couple of HPE Apollos per site, with multiple RAID controllers and RAID volumes per Apollo, and with 40 GbE and 32 Gbps Fibre Channel to read from production. The bottleneck is the source! 😁
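To make the “there’ll always be a bottleneck” point concrete, here’s a trivial sketch of the idea: end-to-end throughput is capped by the slowest stage in the chain. The numbers below are made-up placeholders, not figures from this thread:

```python
# Hypothetical per-stage throughput in MB/s -- placeholder numbers only.
stages = {
    "source read (SAN)": 1800,
    "proxy processing": 2500,
    "network (10 GbE)": 1250,
    "target write (repository)": 900,
}

bottleneck = min(stages, key=stages.get)
print(f"Effective throughput ~= {stages[bottleneck]} MB/s, limited by: {bottleneck}")
```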

Userlevel 7
Badge +17

"dumb question : anyone ever seen faster speeds?"

Yes...up to 3 GB/s in my environment. It doesn’t happen often. I use Backup from Storage Snapshots, similar to Direct SAN, except the backup is taken from a snapshot of my Nimble volume. But most of the time my speeds are only around 600 MB/s to about 1 GB/s.

Glad things are working well. 

Userlevel 7
Badge +20

Direct SAN is enabled and rocking and rolling 

we are reading and writing at 1 Gbps or more and it’s glorious

 

Dumb question: anyone ever seen faster speeds?

if so, how?

 

thanks for all of the help and advice guys 

 

 

Glad to hear it. Be sure to mark the best answer.

Userlevel 5
Badge

Direct SAN is enabled and rocking and rolling 

we are reading and writing at 1 Gbps or more and it’s glorious

 

Dumb question: anyone ever seen faster speeds?

if so, how?

 

thanks for all of the help and advice guys 

 

 

Userlevel 7
Badge +17

WAN accelerators are generally used for offsite jobs; even if you run backups to a remote site but are on a layer 2 network, there’s still no need for them. Just make sure to have proxies at the source and target ends of the backup path for best performance.

Userlevel 7
Badge +20

We are working on setting up Direct SAN now and hope to have it running today. The biggest concern is that Pure wants thick provisioning to be enabled on drives, which we hardly use, so it’s taking a bit longer to digest how it all works in our environment.

any use in setting up WAN Accelerators?

If everything is local, WAN accelerators won't help. Check here for information: https://helpcenter.veeam.com/docs/backup/vsphere/wan_accelerator.html?ver=120

 

Userlevel 5
Badge

We are working on setting up Direct SAN now and hope to have it running today. The biggest concern is that Pure wants thick provisioning to be enabled on drives, which we hardly use, so it’s taking a bit longer to digest how it all works in our environment.

any use in setting up WAN Accelerators?

Userlevel 7
Badge +8

I use ReFS formatted at 64k as per best practice.

Direct SAN is the preferred method if you have the license.

I’m usually hitting about 2 GB/s transfer speeds. Most of the time I’m limited by my target SAN, which is about to be replaced.

 

If the source is your bottleneck, what are your VMware hosts’ management speeds? Which transport mode is it using? I had 1 Gbps management on some of my older hosts and the speeds were awful if I was using the wrong transport mode.

 

If you’re not using Direct SAN, you want to use HotAdd and have a proxy on every host if you can.

 

Userlevel 7
Badge +20

“I also changed the concurrent tasks from 4 at a time to 112 and it is flying now” < This! You want to keep concurrent tasks set as high as your resources can handle. The only latency-type config I would make is for your storage latency: in your VBR menu > General Options > I/O tab, I would configure your “stop assigning new tasks” option to no more than 30 ms; I have 20 ms, but sometimes 30 ms is OK. Otherwise you’ll tend to notice latency within your VMs, or the apps your VMs run.

This is definitely important with storage integration. Take this advice for sure, as Shane knows what he’s talking about.
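As a rough illustration of why raising the concurrent task limit helps when you have many VM disks, here’s a back-of-envelope sketch. All numbers are hypothetical, and it deliberately ignores shared-bandwidth limits; it just shows the effect of processing disks in waves:

```python
import math

def rough_window_minutes(num_disks: int, max_concurrent: int,
                         transfer_min_per_disk: float, overhead_min_per_disk: float) -> float:
    """Disks are processed in waves of at most `max_concurrent`; every disk also pays a
    fixed overhead (snapshot create/remove, etc.) on top of its transfer time."""
    waves = math.ceil(num_disks / max_concurrent)
    return waves * (transfer_min_per_disk + overhead_min_per_disk)

# Hypothetical environment: 200 VM disks, ~5 min of data movement and ~2 min of overhead each.
for limit in (4, 112):
    print(f"{limit:>3} concurrent tasks -> ~{rough_window_minutes(200, limit, 5, 2):.0f} min")
```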

Userlevel 7
Badge +17

‘Bottleneck on every job is “source”’
As far as the ‘bottleneck’ stats go...you’ll never have zero bottleneck. Those stats are used as a gauge to help you troubleshoot when you do run into backup performance issues. They give you a place to start looking, in other words.

Userlevel 7
Badge +17

So basically, Veeam needs to read the data from the datastores the VMs reside on (snapshotting, etc.). As such, I/O control can be configured to not allow Veeam unfettered I/O when reading that data from your production datastores.

Userlevel 7
Badge +17

Read bullet 3 here @clownyboots178 

Userlevel 5
Badge

“I also changed the concurrent tasks from 4 at a time to 112 and it is flying now” < This! You want to keep concurrent tasks set as high as your resources can handle. The only latency-type config I would make is for your storage latency: in your VBR menu > General Options > I/O tab, I would configure your “stop assigning new tasks” option to no more than 30 ms; I have 20 ms, but sometimes 30 ms is OK. Otherwise you’ll tend to notice latency within your VMs, or the apps your VMs run.

Doesn’t this only pertain to backups that live on the same storage as the production data?
