OK gang, I have a weird situation. I have a job where I removed a server and added a replacement with a different server name, different IP, everything. It ran fine for a long time, but all of a sudden, when the job runs, it completes without issue, and then a second run fires with the same job name but showing the previous server and job settings, which of course fails because that server is no longer there. It's even doing the typical three retries before it gives up. Any ideas on how to get rid of the second run? Thanks in advance. JD
Is there a way to set or adjust the spacing in the Veeam O365 job reports? There seems to be a lot of unnecessary word wrapping going on.
Going through the motions of upgrading our Nutanix AHV to V5. The Nutanix clusters are already done, and we're attempting to upgrade the AHV portion. I downloaded the latest version yesterday and it installed without issue. Veeam is reporting that the Nutanix AHV proxies need to be upgraded (as expected), but when we try to upgrade, the upgrade passes an incorrect user ID (root) to the proxies, which of course isn't valid, and the upgrade process errors out with an incorrect username/password. A ticket has already been opened, but support is taking a little too long to respond with a fix. Any ideas?
Had one of my security guys run this by me so I could advise the backup team about a potential issue that might be rolling around. This is the MS KB5025885 in question:
https://support.microsoft.com/en-us/topic/kb5025885-how-to-manage-the-windows-boot-manager-revocations-for-secure-boot-changes-associated-with-cve-2023-24932-41a975df-beb2-40c1-99a3-b3ff139f832d
Will this affect BDR boot ISOs, or cause any issues with backups that now include an incremental with the changes shown in the article?
We have Veeam protecting a couple of Nutanix clusters. Jobs are split between the clusters, and Nutanix is replicating between the two clusters so that if one goes down... The issue we're seeing: when Veeam backs up a server, snapshots are created to do the backup. Replication between the two Nutanix clusters sees the snapshot and replicates it to the second cluster, and since the snapshot doesn't have a retention set, Nutanix gives it a 60-day retention. When we're starting to deal with file servers north of 20 TB, this becomes an issue. Is there any way to force Veeam to set a short retention on its snapshots, or is this something Nutanix should address? Thanks in advance.
This is more of a processing question, but... I have one VBR system that processes some rather large file servers. We're going to set up a Scale-Out Backup Repository to Wasabi with an object-lock-enabled bucket. When Veeam locally merges the latest incremental into a new full, does it push the complete newly created full out to the bucket, or is there some magic where Veeam only updates the current full that's already sitting out in the cloud? People with small internet pipes want to know: with a small domain server it's no big deal to push 20 GB, but when it's 25 TB... I'm also going to assume that since Veeam is in control of the object locking and retention, I won't have to figure out when objects will be unlocked so the magical merge can happen. Thanks ahead of time.
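For anyone wondering how a merged full can avoid a 25 TB re-upload at all: backup products that write to object storage typically store data as immutable, content-addressed blocks, so a "new full" is mostly metadata pointing at blocks the bucket already holds, and only changed blocks go over the wire. Here's a toy Python sketch of that general idea (this is a conceptual illustration only, not Veeam's actual on-bucket format; the class and function names are made up):

```python
import hashlib

class BlockStore:
    """Toy content-addressed store: blocks are keyed by their hash,
    mimicking how object-storage backup formats avoid re-uploading
    data that is already present in the bucket."""
    def __init__(self):
        self.blocks = {}   # hash -> bytes already "in the bucket"
        self.uploaded = 0  # bytes actually sent over the wire

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:   # only brand-new content is uploaded
            self.blocks[key] = data
            self.uploaded += len(data)
        return key

def write_backup(store: BlockStore, blocks: list) -> list:
    """A 'full' is just a list of block references (metadata)."""
    return [store.put(b) for b in blocks]

store = BlockStore()

# Initial full: four distinct 1 KiB blocks -> all four go over the wire.
full_v1 = [bytes([i]) * 1024 for i in range(4)]
write_backup(store, full_v1)
print(store.uploaded)  # 4096

# Synthetic full after one changed block: only the new block uploads,
# the "full" still references all four blocks.
full_v2 = full_v1[:3] + [b"\xff" * 1024]
write_backup(store, full_v2)
print(store.uploaded)  # 5120, not 8192
```

If the real product works along these lines, a local synthetic-full merge wouldn't force the whole full back up the pipe, only the blocks the bucket hasn't seen yet, but that's worth confirming against the vendor's own documentation for your exact repository type.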