Veeam is up and running in my environment - any advice on how to speed up my backups?
Hey Veeam-ers,
I finally got Veeam installed, configured, and running, and I am starting to kick off jobs, but my speeds are slow (100 Mbps or so) when writing
I have a full 10Gb backbone and a dedicated Nimble specifically for my Veeam data, and I have a physical server with 4x 10Gb NICs on it (2 for management and 2 for data)
any ideas as to why it could be slow? I am doing some troubleshooting, but was curious if anyone had any thoughts before I went down a rabbit hole
thanks guys
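Before digging in, a quick back-of-the-envelope sketch of what those numbers mean for a backup window (the 5 TB full below is a made-up size, decimal units, ideal link utilisation assumed) versus what a 10Gb path should be able to do:

```python
# Back-of-the-envelope only: assumes a made-up 5 TB full backup and ideal link
# utilisation, just to show why ~100 Mbps on a 10Gb backbone is worth chasing.
TB = 10**12                      # decimal terabyte, in bytes
full_backup_bytes = 5 * TB       # assumed job size

def hours_at(mbps: float) -> float:
    """Hours to move the full backup at a given speed in megabits per second."""
    bytes_per_second = mbps * 1_000_000 / 8
    return full_backup_bytes / bytes_per_second / 3600

for label, mbps in [("observed ~100 Mbps", 100),
                    ("1 Gbps", 1_000),
                    ("half of a 10Gb link", 5_000)]:
    print(f"{label:>20}: {hours_at(mbps):6.1f} hours")
```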
Did you configure your Repositories to use ReFS or XFS?
Are you using DirectSAN Transport Mode?
Is your network connection a dedicated network for storage only?
You can look at the vSphere User Guide for Veeam (assuming you’re using VMware?) to see how to configure DirectSAN
Also, in your jobs, on the Storage section of the job > Advanced button, vSphere tab, make sure CBT is enabled (should be by default).
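To put a rough number on why CBT matters (the VM size and change rate below are assumptions, not figures from this thread): with CBT the proxy only reads the changed blocks on an incremental instead of scanning the whole disk.

```python
# Rough illustration of the read volume CBT saves; both numbers are assumptions.
vm_size_gb = 2048             # assumed 2 TB VM
daily_change_rate = 0.05      # assumed 5% of blocks change per day

full_scan_gb = vm_size_gb                             # no CBT: read everything to find changes
cbt_incremental_gb = vm_size_gb * daily_change_rate   # CBT: read changed blocks only

print(f"Incremental without CBT: ~{full_scan_gb:.0f} GB read")
print(f"Incremental with CBT   : ~{cbt_incremental_gb:.0f} GB read "
      f"({cbt_incremental_gb / vm_size_gb:.0%} of the disk)")
```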
thanks for the quick reply
Did you configure your Repositories to use ReFS or XFS?
ReFS is what everything is formatted for (that was recommended)
Are you using DirectSAN Transport Mode?
Looking into setting that up, not sure how to do it exactly
Is your network connection a dedicated network for storage only?
Verifying that now. This has its own dedicated VLAN just for backups as well, so the goal is to have it nearly fully separated from the rest of the network
And, as Chris shares, you can add your Nimbles to Veeam (I do), so you can utilize Backup from Storage Snapshots (BfSS). Speeds don’t necessarily top what you’d get with DirectSAN, and sometimes not even hotadd, but it does take the snapshotting load off your VMs.
Hello @clownyboots178
Could you please share a screenshot of the backup job? We need to know what the bottleneck is: source, network, proxy, or target. For example:
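The load line in the job session stats reads something like `Source 99% > Proxy 45% > Network 30% > Target 12%` (numbers made up here); the stage that is busy the highest share of the time is what Veeam flags as the bottleneck. A tiny sketch of reading such a line:

```python
# Made-up example of the load line from a job session; the stage with the
# highest busy percentage is the one reported as the bottleneck.
load_line = "Source 99% > Proxy 45% > Network 30% > Target 12%"

stages = {}
for part in load_line.split(">"):
    name, pct = part.split()
    stages[name] = int(pct.rstrip("%"))

bottleneck = max(stages, key=stages.get)
print(f"Bottleneck: {bottleneck} ({stages[bottleneck]}% busy)")
```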
Here are a couple older posts, but still mostly relevant on enabling Direct SAN using iSCSI:
You can, and in my opinion must, enable hardware snapshots with the Nimble integration. The advantage is that you don't have a snapshot open for a long time, and you can make backups at any time of the day, even during peak working hours. In addition, from the HW snapshots you can create new jobs, e.g. Copy Job, application-aware, etc.
What is the source: a VM, physical server, physical workstation, or file server? How do you back it up, via agent or agentless? If agentless, what is the hypervisor, and what is the datastore type: local disks (HDD or SSD), iSCSI, or something else?
If it is a VM, where is the proxy server installed: inside the hypervisor or on a physical server?
I am working through the setup of DirectSAN to see if that helps; we will be going from the Pure storage that houses all of our data to our Nimble that houses only backup data
I also changed the concurrent tasks from 4 at a time to 112 and it is flying now
just need to figure out how to make a single machine backup perform quicker
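For what it's worth, that lines up with how tasks are scheduled: a task is roughly one disk, so raising concurrency helps the whole job fan out, but a single machine can only use as many tasks as it has disks. A rough model with assumed numbers:

```python
# Rough model, all numbers assumed: one task processes roughly one disk,
# so concurrency helps the whole job far more than any single VM.
per_task_mb_s = 150          # assumed per-task throughput
concurrent_task_limit = 12   # whatever the proxy/repository actually allow

def aggregate_mb_s(disks_ready: int) -> int:
    """Job throughput is capped by both disks in flight and the task limit."""
    return per_task_mb_s * min(disks_ready, concurrent_task_limit)

print("50-VM job (many disks ready):", aggregate_mb_s(50), "MB/s")
print("single VM with 2 disks      :", aggregate_mb_s(2), "MB/s")
```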
“I also changed the concurrent tasks from 4 at a time to 112 and it is flying now” < This! You want to keep concurrent tasks set as high as your resources can handle. The only latency-related setting I would change is storage latency control: in your VBR menu > General Options > I/O tab, set the “stop assigning new tasks” option to no more than 30ms; I have 20ms, but sometimes 30ms is ok. Otherwise you’ll tend to notice latency within your VMs, or in the apps your VMs run.
Isn't this only pertaining to backups that live on the same storage as production data?
So basically, Veeam needs to read the data from the datastore the VMs reside on (snapshotting, etc.). As such, I/O control can be configured so Veeam doesn't have unfettered I/O when reading that data from the production datastores your VMs reside on.
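Put differently, Storage Latency Control just watches datastore latency and holds back the scheduler. A sketch of the idea (the 30ms figure is from the advice above; the second threshold and the latency samples are made up):

```python
# Sketch of the Storage Latency Control idea: above the first threshold Veeam
# stops assigning new tasks to the datastore; above the second it throttles
# existing ones. Only the 30 ms value comes from the thread; the rest is assumed.
assign_threshold_ms = 30
throttle_threshold_ms = 50   # assumed value for the second threshold

for latency_ms in (12, 28, 35, 60):
    if latency_ms >= throttle_threshold_ms:
        action = "throttle existing tasks"
    elif latency_ms >= assign_threshold_ms:
        action = "hold new tasks"
    else:
        action = "assign new tasks"
    print(f"{latency_ms:>3} ms -> {action}")
```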
“Bottleneck on every job is ‘source’” < As far as the bottleneck stats go, you’ll never have zero bottleneck. Those stats are used as a gauge to help you troubleshoot when you do run into backup performance issues. Gives you a place to start looking, in other words.
This is definitely important with storage integration. Take this advice for sure, Shane knows his stuff.
I use ReFS formatted at 64k as per best practice.
Direct SAN is the preferred method if you have the license.
I’m usually hitting about 2GB/s transfer speeds. I’m limited most of the time by my target SAN, which is about to be replaced.
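One more reason the 64K ReFS format matters for speed: synthetic fulls on ReFS/XFS are built with block cloning, so the repository mostly writes metadata instead of re-copying the whole full. A rough sketch with assumed sizes:

```python
# Rough sketch, assumed numbers: what a synthetic full costs with and without
# block cloning (ReFS fast clone / XFS reflink) on the repository.
full_backup_gb = 5000    # assumed size of one full backup file
repo_mb_s = 500          # assumed repository sequential read/write speed

recopy_minutes = full_backup_gb * 1024 / repo_mb_s / 60
print(f"Without block cloning: ~{full_backup_gb} GB re-copied, "
      f"about {recopy_minutes:.0f} min at {repo_mb_s} MB/s")
print("With ReFS fast clone : mostly metadata updates, so minutes rather than hours")
```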
If the source is your bottleneck, what are your VMware hosts' management network speeds, and what transport method is the job using? I had 1Gbps management on some of my older hosts and the speeds were awful when I was using the wrong transport mode.
If you're not using DirectSAN, you want to use Hotadd and have a proxy on every host if you can.
we are working on setting up DirectSAN now and hope to have it running today. The biggest concern is that Pure wants thick provisioning to be enabled on the drives, which we hardly use, so it's taking a bit longer to digest how it all works in our environment
any use in setting up WAN Accelerators?
WAN Accelerators are generally used for offsite jobs; even if you run backups to a remote site but are on a layer 2 network, there's still no need for them. Just make sure to have proxies at the source and target ends of the backup path for best performance.
Direct SAN is enabled and rocking and rolling
we are reading and writing at 1Gbps or more and it’s glorious
dumb question: anyone ever seen faster speeds?
if so, how?
thanks for all of the help and advice guys