Solved

VEEAM is up and running in my environment - any advice on how to speed up my backups?


Userlevel 5
Badge

Hey VEEA-ers,

 

I finally got Veeam installed, configured, and running, and I am starting to kick off jobs, but my speeds are slow (~100Mbps) when reading or writing

 

I have a full 10Gb backbone, a dedicated Nimble specifically for my Veeam data, and a physical server with 4x 10Gb NICs on it (2 for management and 2 for data)

 

any ideas as to why it could be slow? I am doing some troubleshooting, but was curious if anyone had any thoughts before I went down a rabbit hole

 

thanks guys


Best answer by coolsport00 22 August 2023, 20:08


39 comments

Userlevel 7
Badge +8

I hit 3GB/s at peak, I'll hit 2GB/s often, and 1GB/s is common; however, depending on how much data is running per job and on concurrency, I see as low as 500MB/s.

 

As stated, WAN accelerators only help for shipping data offsite, and if you have a 10Gbps connection, YMMV. I don't use one and send a bunch of data to other locations without issue.

Userlevel 7
Badge +17

And, as Chris shares, you can add your Nimbles to Veeam (I do), so you can utilize Backup from Storage Snapshots (BfSS). Speeds aren't necessarily much higher than with DirectSAN, and sometimes not even higher than HotAdd, but it does take the snapshot load off your VMs.

Userlevel 7
Badge +6

Hello @clownyboots178 

Could you please share a screenshot of the backup job? We need to know what the bottleneck is: source, network, proxy, or target.

 

Userlevel 7
Badge +17

Did you configure your Repositories to use ReFS or XFS?

Are you using DirectSAN Transport Mode?

Is your network connection a dedicated network for storage only?

You can look at the vSphere User Guide for Veeam (assuming you’re using VMware?) to see how to configure DirectSAN

You can also review the Best Practice Guide

Userlevel 7
Badge +20

Also with Nimble you can do storage integration as well which would really speed up backups.  Check these links -

https://www.veeam.com/blog/nimble-storage-configuration-recovery.html

https://helpcenter.veeam.com/docs/backup/vsphere/nimble_add.html?ver=120

 

Userlevel 5
Badge

Bottleneck on every job is “source”

Userlevel 7
Badge +20

Bottleneck on every job is “source”

So when it reads the VM files for backup. Trying Nimble integration should help.

Userlevel 5
Badge

Direct SAN is enabled and rocking and rolling 

we are reading and writing at 1gbps or more and it’s glorious

 

dumb question : anyone ever seen faster speeds?

if so, how?

 

thanks for all of the help and advice guys 

 

 

Userlevel 5
Badge

I am talking with someone now about it all

 

You guys have been a terrific help and always are here to help me out with things

 

thanks for all of the advice, things are moving much better now

 

thanks again guys

Userlevel 7
Badge +8

It’s trying to use DirectSAN, failing, then failing over to VMware Backup Proxy.

Now, depending on how “VMware Backup Proxy” is configured, that is your issue. 

For me, “VMware Backup Proxy” in this case would be a VM; if it was on the same host as the VM being backed up, it would use HotAdd and be quite fast. It essentially mounts the volume, then spits it out the 10Gb port in the VMware host.

This is why multiple VM proxies (1 per host is recommended) are needed to go this route. If the proxy isn't on the same host, it falls back to NBD.

It will also be using the VMware Management ports, with overhead in this case.

 

If you have 1Gb management ports, it's slow… that speed is within the 1Gb range. 1Gbps ≈ 125MB/s (theoretical max).

 

10Gbps ≈ 1,200MB/s usable, so you will see around 1GB/s in Veeam.
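A quick sanity check of that math (the ~4% overhead figure here is just an assumed allowance for protocol framing, not a measured value):

```python
def line_rate_mb_per_s(gbps: float, overhead: float = 0.0) -> float:
    """Convert a link speed in Gbps to MB/s, minus a fractional overhead."""
    return gbps * 1000 / 8 * (1 - overhead)

print(line_rate_mb_per_s(1))          # 125.0 -> theoretical max on 1GbE
print(line_rate_mb_per_s(10, 0.04))   # 1200.0 -> 10GbE after ~4% assumed overhead
```

So if the job tops out around 125MB/s, suspect a 1Gb hop somewhere in the path.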

 

Check your transport modes, make sure your proxy that you are using and VMware management ports are 10Gb networking.

 

What I do is thin provision everything on the SAN itself to save space, but create my VM’s thick provisioned. It’s a double bonus that they don’t expand and blow up the datastore too. 

 

Userlevel 7
Badge +17

Also, in your jobs, on the Storage section of the job > Advanced button, vSphere tab, make sure CBT is enabled (should be by default).

Userlevel 5
Badge

thanks for the quick reply

 

Did you configure your Repositories to use ReFS or XFS?

ReFS is what everything is formatted for (that was recommended)

 

Are you using DirectSAN Transport Mode?

Looking into setting that up, not sure how to do it exactly 

 

Is your network connection a dedicated network for storage only?

Verifying that now; this has its own dedicated VLAN just for backups as well, so the goal is to have it nearly fully separated from the rest of the network 

Userlevel 7
Badge +9

Hi @clownyboots178 

Apart from verifying why you drop to 100Mbps,

you can, and in my opinion should, enable hardware snapshots with the Nimble integration.
The advantage is that the VM snapshot isn't kept open for long, and you can run backups at any time of day, even during peak working hours.
In addition, from the HW snapshots you can create new jobs, e.g. Copy Jobs, application-aware processing, etc.

Veeam and Nimble storage Deployment guide

Veeam and Nimble Storage integration: First-hand backup and replication

Hardware Snapshot Orchestration – A game-changer from HPE storage and Veeam

regards

Userlevel 7
Badge +9

Bottleneck on every job is “source”

Installing Veeam ONE Monitor helps diagnose the backup infrastructure and its hypervisor.
Do you use VMware or Hyper-V?
Are the datastores FC, iSCSI, or NFS?

Do you use virtual appliance mode?

Performance Bottlenecks - User Guide for VMware vSphere (veeam.com)

 

regards

Userlevel 5
Badge

I am working through the install of DirectSAN to see if that helps; we will be going from Pure storage that houses all of our data to our Nimble that houses only backup data

I also changed the concurrent tasks from 4 at one time to 112 and it is flying now

just need to figure out how to make a single machine backup perform quicker
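A rough sketch of why raising the task limit helped (the numbers below are illustrative assumptions, not measurements): aggregate throughput grows with concurrency until a shared resource saturates, but any single VM's backup still runs at roughly the per-task rate, which is why one-machine jobs don't speed up.

```python
def aggregate_mb_per_s(per_task_mb_s: float, tasks: int, ceiling_mb_s: float) -> float:
    # Aggregate throughput: per-task rate times task count, capped by the
    # slowest shared resource (source array, proxy, network, or repository).
    return min(per_task_mb_s * tasks, ceiling_mb_s)

print(aggregate_mb_per_s(100, 4, 1200))    # 400 -> pipe underutilized
print(aggregate_mb_per_s(100, 112, 1200))  # 1200 -> saturated, yet each
                                           #         single-VM job still ~100 MB/s
```

To make a single machine faster you'd instead look at its transport mode, per-disk read speed on the source array, and splitting it across multiple disks/tasks.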

Userlevel 7
Badge +20

Direct SAN is enabled and rocking and rolling 

we are reading and writing at 1gbps or more and it’s glorious

 

dumb question : anyone ever seen faster speeds?

if so, how?

 

thanks for all of the help and advice guys 

 

 

There was a ‘Beat the Gostev’ challenge that Veeam ran, which saw a couple of submissions of over 100GB/s. The winner was 147.7GB/s, so Veeam does scale VERY well.

 

Make sure you’ve got Multipath I/O enabled so that you can write to your Nimble via multiple channels. Are your data interfaces shared between Nimble access and your production storage?

 

A diagram of your topology would be a good place to start identifying potential bottlenecks. There’ll always be a bottleneck, but it’s got to be fast enough to meet your needs. I just did a deployment that was seeing 4-5GB/s throughput per site; it was with a couple of HPE Apollos per site, with multiple RAID controllers and RAID volumes per Apollo, with 40GbE and 32Gbps Fibre Channel to read from production. The bottleneck is source! 😁

Userlevel 7
Badge +20

Hi, I notice your screenshot says “Using backup proxy VMware Backup Proxy”, this is the default proxy deployed when you install Veeam. Is this intentional that you’re using this server as a proxy? Or do you have Veeam as a virtual machine for example, and then a physical server as your proxy & repository?

Userlevel 7
Badge +9

I don't know what happened exactly, but I went from 1Gbps read and 950Mbps write speeds to 80Mbps write or even lower

any idea why that might have happened?

 

My job also says “DirectSAN is not available, failing over”, but I have failover turned off

 

 

 

Hi @clownyboots178 

do the proxies and the VBR server have access to the datastores?

Have you set Direct storage access mode on the proxy?

Backup Proxies -> VMWare Backup Proxy -> Transport Mode

and in the job check this.

ATTENTION.

Only VM disks provisioned as "Thick Provision Lazy Zeroed" or "Thick Provision Eager Zeroed" can be used in Direct SAN mode.

check this:

Direct SAN Access - User Guide for VMware vSphere (veeam.com)

 

Userlevel 7
Badge +17

Here are a couple older posts, but still mostly relevant on enabling Direct SAN using iSCSI:

https://veducate.co.uk/how-to-setup-veeam-direct-san-backup-over-iscsi-unleash-the-speed/

https://www.danilochiavari.com/2014/01/20/configuring-direct-san-backups-in-veeam-br-for-vmware-vsphere/

Userlevel 7
Badge +6

Bottleneck on every job is “source”

What is the source? VM, physical server, physical workstation, file server? How do you back it up; via Agent or agentless? If agentless, what is the hypervisor? What is the datastore type; local disks (HDD or SSD), iSCSI, or something else?

If it is a VM, where is the proxy server installed? Inside the hypervisor or on a physical server?

Userlevel 5
Badge

“I also changed the concurrent tasks from 4 at one time to 112 and it is flying now” < This! You want to keep concurrent tasks set as high as your resources can handle. The only latency-type config I would make is for your storage latency: in your VBR menu > General Options > I/O tab, I would set the “stop assigning new tasks” option to no more than 30ms; I have 20ms, but sometimes 30ms is ok. Otherwise you’ll tend to notice latency within your VMs, or the apps your VMs run.

Isn't this only pertaining to backups that live on the same storage as production data?

Userlevel 7
Badge +17

So basically, Veeam needs to read data from the datastores where your VMs reside (snapshotting, etc.). As such, I/O control can be configured so that Veeam doesn't get unfettered I/O when reading that data from your production datastores.

Userlevel 7
Badge +20

we are working on setting up DirectSAN now and hope to have it running today. The biggest concern is that Pure wants thick provisioning enabled on the drives, which we hardly use, so it's taking a bit longer to digest how it all works in our environment

any use in setting up WAN Accelerators?

If everything is local, WAN accelerators won't help. Check here for information - https://helpcenter.veeam.com/docs/backup/vsphere/wan_accelerator.html?ver=120

 

Userlevel 7
Badge +20

How is the storage you are backing up from connected to the environment? iSCSI or FC? Might want to look there, since the job failed over to Network mode. Something in the connection to the storage is preventing Direct SAN access.

Userlevel 7
Badge +20

Hi, I notice your screenshot says “Using backup proxy VMware Backup Proxy”, this is the default proxy deployed when you install Veeam. Is this intentional that you’re using this server as a proxy? Or do you have Veeam as a virtual machine for example, and then a physical server as your proxy & repository?

This is intentional, we have a dedicated Dell box that has way more horsepower than it needs and we did that so it could be the proxy and handle anything we throw at it

 

No network changes as this happened every night this week between the same time frame with all other jobs

 

what should I look for in the logs?

 

So this Dell box is the VBR server + proxy, correct?
