
Hey Veeam-ers,

 

I finally got Veeam installed, configured, and running, and I am starting to kick off jobs, but my speeds are slow (around 100 MB/s) when writing.

 

I have a full 10 Gb backbone, a dedicated Nimble specifically for my Veeam data, and a physical server with 4x 10 Gb NICs (2 for management and 2 for data).

 

Any ideas as to why it could be slow? I am doing some troubleshooting, but was curious if anyone had any thoughts before I went down a rabbit hole.

 

Thanks, guys!

"dumb question : anyone ever seen faster speeds?"

Yes, up to 3 GB/s in my environment. It doesn't happen often. I use backup from storage snapshots, which is similar to Direct SAN except the backup is taken from a snapshot of my Nimble volume. But most of the time my speeds are only around 600 MB/s to about 1 GB/s.

Glad things are working well. 


Direct SAN is enabled and rocking and rolling.

We are reading and writing at 1 GB/s or more, and it's glorious.

 

Dumb question: anyone ever seen faster speeds?

If so, how?

 

Thanks for all of the help and advice, guys.

 

 

There was a ‘Beat the Gostev’ challenge that Veeam ran, which saw a couple of submissions of over 100 GB/s. The winner was 147.7 GB/s, so Veeam does scale VERY well.

 

Make sure you’ve got Multipath I/O (MPIO) enabled so that you can write to your Nimble via multiple channels. Are your data interfaces shared between access to the Nimble and your production storage?
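If you want a quick way to confirm MPIO is actually claiming the Nimble LUNs on the proxy, here's a rough sketch of mine (untested in your environment) that wraps Windows' built-in mpclaim tool; it assumes the MPIO feature is installed on that box:

import subprocess

def mpio_summary() -> str:
    """Return mpclaim's view of MPIO-managed disks, or a hint if MPIO is absent."""
    try:
        # "mpclaim -s -d" lists the disks MPIO has claimed and their load-balance policy
        result = subprocess.run(["mpclaim.exe", "-s", "-d"],
                                capture_output=True, text=True, check=True)
        return result.stdout
    except FileNotFoundError:
        return "mpclaim.exe not found - the MPIO feature is probably not installed."
    except subprocess.CalledProcessError as err:
        return err.stderr or "mpclaim failed - check the MPIO configuration."

if __name__ == "__main__":
    print(mpio_summary())

Every Nimble LUN should show up with a multi-path load-balance policy; if a LUN is missing from the list, only one path is in play.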

 

A diagram of your topology would be a good place to identify potential bottlenecks. There’ll always be a bottleneck, but it’s got to be fast enough to meet your needs. I just did a deployment that was seeing 4-5 GB/s throughput per site; it used a couple of HPE Apollos per site, with multiple RAID controllers and RAID volumes per Apollo, and 40 GbE plus 32 Gbps Fibre Channel to read from production. The bottleneck is the source! 😁


I hit 3 GB/s at peak, 2 GB/s often, and 1 GB/s is common; however, depending on how much data is running per job and the concurrency, I see as low as 500 MB/s.

 

As stated, WAN accelerators only help for shipping data offsite, and if you have a 10 Gbps connection, YMMV. I don’t use one and send a bunch of data to other locations without issue.


I don't know what happened exactly, but I went from 1 GB/s read and 950 MB/s write speeds down to 80 MB/s write or even lower.

Any idea why that might have happened?

 

My job also says “Direct SAN is not available, failing over,” but I have failover turned off.

 

 


Are there any recent changes to your network, or any failures on switches or something? You can also check the logs to get more details: C:\ProgramData\Veeam\Backup
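If it helps, a rough sketch like this (my own; tweak the keywords to taste) will skim that folder for anything that looks like an error, warning, or failover:

import re
from pathlib import Path

LOG_DIR = Path(r"C:\ProgramData\Veeam\Backup")   # default VBR log location
PATTERN = re.compile(r"\b(error|warn|failed|failover)", re.IGNORECASE)

def scan_logs(log_dir: Path = LOG_DIR) -> None:
    """Print every log line mentioning an error, warning, failure, or failover."""
    for log_file in sorted(log_dir.rglob("*.log")):
        for lineno, line in enumerate(
                log_file.read_text(errors="ignore").splitlines(), start=1):
            if PATTERN.search(line):
                print(f"{log_file.name}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan_logs()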


Hi, I notice your screenshot says “Using backup proxy VMware Backup Proxy”. This is the default proxy deployed when you install Veeam. Is it intentional that you’re using this server as a proxy? Or do you have Veeam as a virtual machine, for example, and a physical server as your proxy & repository?



This is intentional. We have a dedicated Dell box with way more horsepower than it needs; we did that so it could be the proxy and handle anything we throw at it.

 

No network changes, and this happened every night this week in the same time frame, across all the other jobs too.

 

What should I look for in the logs?

 


How is the storage you are backing up from connected to the environment - iSCSI or FC? You might want to look there, since the job failed over to Network mode. Something about the connection to the storage is preventing Direct SAN access.
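If it's iSCSI, one quick low-tech check (a sketch of mine; the portal address below is a placeholder) is whether the proxy can still reach the array's iSCSI portal at all - 3260 is the standard iSCSI target port:

import socket

def portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(portal_reachable("nimble-data.example.local"))  # placeholder address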



So this Dell box is the VBR server + proxy, correct?



Hi @clownyboots178

Do your proxies and VBR server have access to the datastores?

Have you set the proxy to Direct storage access transport mode?

Backup Proxies -> VMware Backup Proxy -> Transport Mode

And check the corresponding option in the job as well.

ATTENTION: only VM disks that are "Thick Provision Lazy Zeroed" or "Thick Provision Eager Zeroed" can be used in Direct SAN mode.

Check this:

Direct SAN Access - User Guide for VMware vSphere (veeam.com)
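If you want to double-check which VMs still have thin disks (and so won't qualify, per the note above), a pyVmomi sketch along these lines could do it. pyVmomi is just my suggestion here, and the vCenter address and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def thin_disk_report(host: str, user: str, pwd: str) -> None:
    """List every VM disk that is thin provisioned."""
    ctx = ssl._create_unverified_context()   # lab shortcut; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.config is None:            # skip inaccessible VMs
                continue
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualDisk) and \
                        getattr(dev.backing, "thinProvisioned", False):
                    print(f"{vm.name}: {dev.deviceInfo.label} is thin provisioned")
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    thin_disk_report("vcenter.example.local", "administrator@vsphere.local", "***")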

 



 

Both of those options have been checked for some time, and we found out that only THICK disks will work with Direct SAN. We are all THIN, so I suppose we are not actually using it, but since we have the LUNs connected to this machine, our speeds were amazing and the jobs indicated “(san)”.

Something has to be happening on the network or with the server, given that this happens every day on every job. We have a GPO auto-logging everyone off; could that cause an issue?

 

 


I would not think an auto-logoff GPO would factor into this. As noted, you are using thin provisioning, and based on the previous post that is part of the issue. Maybe this requires a support case to investigate further.


I am talking with someone now about it all

 

You guys have been a terrific help and are always here to help me out with things.

 

Thanks for all of the advice; things are moving much better now.

Thanks again, guys.


It’s trying to use Direct SAN, failing, then failing over to “VMware Backup Proxy”.

Now, depending on how “VMware Backup Proxy” is configured, that is your issue.

For me, “VMware Backup Proxy” in this case would be a VM. If it was on the same host as the VM being backed up, it would use HotAdd and be quite fast; it essentially mounts the volume and then spits it out of the 10 Gb port on the VMware host.

This is why multiple VM proxies (1 per host is recommended) are needed to go this route. If the proxy isn’t on the same host, it uses NBD.

NBD will also use the VMware management ports, with their overhead.

 

If you have 1 Gb management ports, it’s slow… that speed is within the 1 Gb range. 1 Gbps = roughly 125 MB/s (theoretical max).

 

10 Gb = roughly 1,200 MB/s, so you will see around 1 GB/s in Veeam.
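For reference, here's the back-of-the-envelope math in a few lines of Python (a rough sketch; it ignores protocol overhead, which typically costs a little more):

def link_speed_mb_per_s(gbps: float) -> float:
    """Theoretical max throughput of a link in MB/s (1 Gbps = 1000 Mb/s / 8 bits)."""
    return gbps * 1000 / 8

for gbps in (1, 10, 25, 40):
    print(f"{gbps:>2} GbE ~= {link_speed_mb_per_s(gbps):,.0f} MB/s theoretical max")

# 1 GbE  ~=   125 MB/s -> right where the ~100 MB/s slow jobs were sitting
# 10 GbE ~= 1,250 MB/s -> the ~1 GB/s range you see in Veeam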

 

Check your transport modes, and make sure the proxy you are using and the VMware management ports are on 10 Gb networking.

 

What I do is thin provision everything on the SAN itself to save space, but create my VMs thick provisioned. It’s a double bonus that they don’t expand and blow up the datastore, too.

 

