What is your Proxy server setup like or is it all in one on the Veeam box? Also maybe try setting a GW server for the NFS shares that are close to the storage array to see if that changes the backup window.
Hello Chris,
There's no proxy; it's an application plug-in backup job. The gateway is set to automatic, so it selects the VBR server for the plug-in backup.
The problem is that any change takes a minimum of 10 hours to evaluate.
Well, then I'm not sure what to suggest if the issue is the time it takes to test things. You need to tweak and try settings to see about getting this window down. If a one-click fix is required, that is not possible.
Hey!
I've just been going through the same challenges. What OS is your RMAN running on?
Hello @MicoolPaul
Red Hat Linux
Okay, was wondering as I’d just done a lot of optimisations with AIX.
When using Networker, is that also using NFS to the backup target? I believe that when you're working at that scale you'd be better off not creating a second network connection dependency for writing data. Other considerations apply around NFS scaling too.
How did you land on 10 channels for that database size? I’m working with a 16TB database that has more channels currently.
I’d start by looking at the configuration differences between networker and Veeam.
Let’s compare the backup target storage, number of channels, configs such as FILESPERSET too.
I’d also be curious to know whether you’re comparing like for like with compression. Are you using RMAN or Veeam compression? And what are you using for networker?
Don’t know if you’ve seen these resources too:
https://bp.veeam.com/vbr/4_Operations/O_Application/oracle.html#veeam-plug-in-for-oracle-rman
https://fromthearchitect.net/wp-content/uploads/2022/09/Oracle_and_Veeam.pdf
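For reference, this is roughly what the channel count and FILESPERSET look like on the RMAN side. This is a generic RMAN sketch with illustrative values, not the poster's actual configuration or Veeam's recommended settings:

```sql
-- Illustrative only: set default channel parallelism for the SBT device
-- (the Veeam plug-in presents itself to RMAN as an SBT library)
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 10 BACKUP TYPE TO BACKUPSET;

RUN {
  -- Lower FILESPERSET produces more, smaller backup sets, which can
  -- spread work more evenly across channels on large databases
  BACKUP DATABASE FILESPERSET 4;
}
```

Comparing these values between the Networker and Veeam jobs is usually the fastest way to spot a like-for-like mismatch.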
Hello @MicoolPaul
Networker was backing up to Data Domain, and it will be discontinued.
I used Veeam's basic compression; I assume Networker was relying on the Data Domain.
Yep, if you're using a deduplication appliance such as Data Domain, you'd disable compression and let the Data Domain handle it. I appreciate it's not "like for like" with post-process deduplication, but have you tested the performance without compression enabled at either RMAN or Veeam?
I tested Veeam compression only.
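For a quick timing comparison of the compressed vs. uncompressed path on the RMAN side, a minimal sketch using standard RMAN syntax (illustrative, not a recommendation for this environment):

```sql
-- Compressed backup set: saves network/target bandwidth,
-- but costs CPU cycles on the source database server
BACKUP AS COMPRESSED BACKUPSET DATABASE;

-- Uncompressed backup set: use this for a like-for-like timing test
-- to see whether compression is the bottleneck
BACKUP AS BACKUPSET DATABASE;
```

Running the same workload both ways is the simplest way to isolate how much of the window is spent on compression.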
Compression will impact CPU, so if your source is being throttled by the CPU cycles available, that will become your bottleneck.
If you view the job within VBR, what % busy does it state for source, network, and target? That gives us an idea of where your best performance gains will come from.
Bottleneck is the target
I suspect that’s because of the NFS setup you’re using. You’d see better performance with the disks appearing directly to the repository/repositories over iSCSI or Fibre Channel and backed with multipathing.
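For context, presenting the repository disks over iSCSI or FC on Linux typically means configuring device-mapper multipathing via `/etc/multipath.conf`. A minimal sketch with common defaults only (not tuned values for any particular array):

```
# /etc/multipath.conf -- minimal illustrative settings
defaults {
    user_friendly_names yes   # use mpathN aliases instead of WWIDs
    find_multipaths     yes   # only build multipath devices for multi-path LUNs
}
```

Array vendors usually publish their own recommended `device` stanzas, which would take precedence over these generic defaults.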
Same idea as @MicoolPaul: I think the Oracle cluster is sufficiently provisioned in compute (RAM + CPU) to back up this amount of data. I would suggest raising the number of RMAN channels significantly; 10 channels is clearly not enough, from my point of view, for backup and restore at this scale.
The plug-in needs 1 CPU core and 200 MB of RAM on the database server per concurrently used channel.
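As a quick sanity check of that guideline, the per-channel requirement multiplies out as follows (the channel count here is just an example, not a sizing recommendation):

```shell
# Rough sizing per the 1 core / 200 MB-per-channel plug-in guideline
# (channel count is illustrative)
CHANNELS=24
CORES=$CHANNELS
RAM_MB=$((CHANNELS * 200))
echo "channels=$CHANNELS cores=$CORES ram_mb=$RAM_MB"
```

So going from 10 to 24 channels would take the plug-in's expected footprint on the database server from 10 cores / 2 GB to 24 cores / roughly 4.8 GB.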
Did you open a support case so a dedicated RMAN support engineer can do a deep-dive check of the performance?
Share the case number; I'm pretty sure the PM team will check it and give you a review.
Hello Everyone,
The backup window decreased from 16 hours to 10 hours with the following changes:
Using a Linux server as the gateway.
Using 24 RMAN channels instead of 10.
Using Veeam compression.