Hi Community -
Yes...I ask questions too! 😉
I recently noticed the performance of a subset of my Backup Jobs dropping from hundreds of MB/s to just 5-10MB/s since last Fri. I updated my Veeam environment on Sun and initially thought it happened after that (I noticed yesterday a few jobs took 4-14hrs to run!), but after further investigation I found the perf drop started 1st thing Fri morning.

I have 2 other subsets of Jobs which run fine. Each of the 3 “sets” of Jobs uses different Proxies and Repositories. The whole subset which uses a certain physical Linux Proxy and physical Linux Repo has this performance issue. Another subset, which uses a different pair of physical Proxy and Repo boxes, is fine. And the last subset of Jobs, which uses hotadd, is fine. So I’ve obviously narrowed the issue down to either the physical Proxy or the Repo on this subset. The Job stats went from a bottleneck of Source in the 90% range to Target in the high 90% range.

I’ve checked my Proxy frontward and backward. All configs seem fine. My network seems ok. NICs and HBAs are “up” and seemingly working fine. Multipathing on it and the Repo seems fine. Connections to my prod storage array (Proxy) and backup storage array (Repo) are still there...again, no changes were made there. The network backbone is 10Gb, and the storage network is isolated so no other traffic congestion traverses it.
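For reference, below is roughly the kind of quick spot-check I’ve been running on the Linux Proxy and Repo to confirm NIC link state/speed and multipath path health. Just a rough sketch; the interface names are placeholders, not my actual NICs, and the tools need root for full detail.

```
#!/usr/bin/env python3
"""Quick health spot-check for a Linux Proxy/Repo: NIC link state/speed and multipath paths.
Interface names below are placeholders -- swap in the box's actual 10Gb NICs."""
import subprocess

NICS = ["ens1f0", "ens1f1"]  # placeholder interface names

def run(cmd):
    # Capture stdout as text; run as root for complete output
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# NIC side: pull Speed / Duplex / Link detected from ethtool
for nic in NICS:
    for line in run(["ethtool", nic]).splitlines():
        line = line.strip()
        if line.startswith(("Speed:", "Duplex:", "Link detected:")):
            print(f"{nic}: {line}")

# Storage side: flag any multipath path that is not healthy
for line in run(["multipath", "-ll"]).splitlines():
    if "failed" in line or "faulty" in line:
        print("Multipath issue:", line.strip())
print("Multipath check done.")
```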
I am working with Support, but was wondering if anyone has experienced this issue before and what was done to resolve it? I have experienced a similar issue before, and the cause was the network (fiber, believe it or not) over a short-range internal WAN (to our DR site). But this problem subset of Jobs backs up to a ‘local’ array. Still could be network-related I guess...bad twinax cables maybe? Anyway...going to attempt some kind of speed test to see, but again...curious if others had an issue like this and what the resolution was.
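For the speed test, something like this raw TCP throughput check between the Proxy and the Repo is what I have in mind. Just a sketch; the hostname and port are placeholders, and an iperf3 run between the two boxes would tell the same story.

```
#!/usr/bin/env python3
"""Rough point-to-point TCP throughput test between two Linux boxes (e.g. Proxy -> Repo).
Start with --listen on the receiving side, then run the sender pointed at it.
Host and port below are placeholders."""
import argparse, socket, time

PORT = 5201               # placeholder port
CHUNK = 4 * 1024 * 1024   # 4 MiB send/receive buffer
TOTAL = 2 * 1024 ** 3     # push ~2 GiB total

def listen(port):
    # Receiver: accept one connection and measure how fast data arrives
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        print("Connection from", addr)
        received, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"Received {received/1e6:.0f} MB in {secs:.1f}s -> {received/secs/1e6:.0f} MB/s")

def send(host, port):
    # Sender: stream zero-filled chunks and measure outbound rate
    payload = b"\0" * CHUNK
    sent, start = 0, time.time()
    with socket.create_connection((host, port)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"Sent {sent/1e6:.0f} MB in {secs:.1f}s -> {sent/secs/1e6:.0f} MB/s")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--listen", action="store_true")
    p.add_argument("--host", default="repo.example.local")  # placeholder Repo hostname
    args = p.parse_args()
    listen(PORT) if args.listen else send(args.host, PORT)
```

If that comes back well under line rate on this pair but full speed on the healthy Proxy/Repo pair, that would point at cabling/NIC on this path rather than the storage side.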
Thanks all.