I attended the Veeam Toronto event recently, met some great people there, and thought it might be time to join the Community. Nice to be here, and I hope to spend some time giving back in the future. For now, I'm trying to work my way through the Veeam 13 upgrade, as it's the first upgrade where I've run into a major issue since starting with 9.5.
I upgraded Veeam ONE, then Backup Enterprise Manager, then B&R from 12.3 to 13 yesterday. I have run into a couple of issues and currently have a support ticket open, but thought I would highlight some growing pains here as well as ask if anyone has seen this yet. The B&R upgrade process went smoothly until I opened the new console.
- First problem: I logged in and the console opened fine (after running as Administrator), but I lost access to my Nutanix AHV clusters and backup proxy machines in the backup infrastructure. I have a mix of AHV and VMware clusters, and the VMware side had no issues - my B&R VM and another Windows-based proxy used for my VMware environment ran the automatic updates without issue. Support quickly provided KB4687, which had the fix, noting that a certificate generated by a "really old version" may be missing the "Basic Constraints" field.
- Easy one-minute fix per the KB. I logged back into the console and could now see my AHV clusters and proxies. Great.
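For anyone hitting the same symptom, a quick way to confirm whether a certificate actually lacks the extension KB4687 describes is to inspect it with openssl. This is just a sketch under assumptions: the file path is a placeholder, and you would first need to export the Veeam-generated certificate you suspect to a PEM file.

```shell
# Sketch: check an exported certificate for the X509v3 Basic Constraints
# extension. /tmp/veeam-cert.pem is a placeholder path -- export the
# certificate from your own server first.
if openssl x509 -in /tmp/veeam-cert.pem -text -noout | grep -q "Basic Constraints"; then
  echo "Basic Constraints present"
else
  echo "Basic Constraints missing - matches the KB4687 symptom"
fi
```

If the extension is missing, the one-minute fix in the KB (regenerating the certificate) is the way to go; the check above just tells you whether you're looking at the same root cause.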
- Next problem: with the AHV plug-in issue resolved, logging into the console prompted me to upgrade my AHV proxy VMs, similar to what I've seen in past updates in v12.
- The auto-upgrade failed, so my AHV backups started failing due to incompatible AHV proxy VMs. VMware-based backups are running with no problem.
- I'm unable to edit or see details of existing AHV backup jobs, and I get an error when I try.
- I updated my support case and, while waiting for an answer back, did some digging and found info suggesting you remove the old AHV proxies and create new ones, but not remove the associated Nutanix cluster under Managed Servers. It said to run a rescan on the cluster once the new proxy is in place, and the existing backup jobs should associate with the new proxy. I have a couple of non-prod clusters, each with multiple jobs, so I figured I'd give it a shot. ***DON'T TRY THIS YET***
- Cluster 1: Removed the old proxy and created a new proxy from the console. All went well, but the backup jobs associated with the cluster disappeared, and the associated backups now show under Disk (orphaned).
- Cluster 2: I read more and found a suggestion to not remove the old proxy first, so I tried again on one more non-prod cluster while waiting for the support update. This time I added the new proxy first and rescanned the cluster, but I still could not edit the existing job; it said the job is associated with a proxy that requires a major upgrade. I thought, OK, maybe now I can delete the old proxy, rescan the cluster, and that will work... nope. The backup jobs again disappeared and the backups moved to Disk (orphaned).
I'm now waiting on support, having updated the ticket yesterday. Worst case, it looks like I will need to create new backup jobs, as the old ones are currently stuck associated with the old AHV proxy VMs. I suggest making sure you are prepared for potential issues like this if you're working with AHV backups.
If anyone has seen this and has a quick fix, please let me know and I will give it a try, but I'm about out of time waiting, as I may just need to move on and recreate all my AHV jobs to avoid more backup misses.
