Replies posted by dloseke
P.S.: I'm answering so late because I just got this. It seems like (agaaain) I don't receive notifications! I think twice in the past two weeks I've clicked the Unsubscribe from Notifications link in the notification email instead of the link to open up the topic… and then I have to go find the notification settings again to resubscribe.
I feel like it should be noted that every M365 backup product I've seen works this way. It does, more or less, incremental backups to grab each file that has been added or changed. M365 backups don't run in the same fashion as traditional backups, like you would use if you were backing up an Exchange server as a VM, or even with an agent. They only look inside the mailboxes/OneDrive/SharePoint accounts/repositories for the data that has changed.
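To illustrate the idea, here is a minimal sketch of that "look inside for what changed" approach; this is not Veeam's actual engine, just a generic checkpoint-based incremental sweep with made-up sample items:

```python
# Illustrative only: a generic incremental sweep, not any vendor's real code.
# Each run remembers a checkpoint and grabs items modified after it.
from datetime import datetime

mailbox_items = [
    {"id": "msg-1", "modified": datetime(2023, 1, 10)},
    {"id": "msg-2", "modified": datetime(2023, 1, 12)},
    {"id": "msg-3", "modified": datetime(2023, 1, 15)},
]

def incremental_sweep(items, last_checkpoint):
    """Return only the items added or changed since the previous run."""
    return [item for item in items if item["modified"] > last_checkpoint]

changed = incremental_sweep(mailbox_items, datetime(2023, 1, 11))
print([item["id"] for item in changed])  # only items changed after the checkpoint
```

The point is just that the product walks the mailboxes/sites themselves rather than imaging a server, which is why it behaves differently from a VM- or agent-based job.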
Professionally, one of my goals is to finish my VMCE certification… still not there yet, but I need to do it very soon and should have it done by the end of September. I'm also going to be vetting some product offerings, such as deploying Veeam Backup for Office 365 as a service provider but using Wasabi for the storage, as well as learning more about (and probably deploying) Veeam Disaster Recovery Orchestrator. As for things I've actually completed: I had to do some major infrastructure refreshes at two of our locations, and with that I finally have CDP replication running, and it seems to be running pretty well with a 15-second RPO. Quite pleased about that versus the 4-hour snapshot replication we had in place for the same machines. And we now have our backups copied to Wasabi immutable object storage. I do a lot of immutable backups for clients, but it seems like our internal stuff always gets neglected, so I'm happy to be able to put some focus on that amongst our mass of client
For my customers, they are using 30 or 90 days of immutability. GFS retention keeps much more, in most cases up to 1 or 3 years for some depending on how much they are willing to pay, but that's more archival of data than data recovery. Cost is the primary reason a lot are using object storage over VCC… they can keep more data for cheaper with object, and they want immutability now. The immutability period is more from a recovery standpoint: assuming they are immutable for 90 days, then that data can't be touched for 90 days. If I had clients looking for more of an archival function, then we'd be talking about Glacier with immutability, or possibly tapes, etc. I have one client that recently started to discern between archiving and recovering data. They have archival VMs that keep 7 years of data, and those VMs are backed up to tape and rotated yearly, so at any given time they can go back nearly 8 years if needed. They were going to try and keep the VM'
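The recovery-window logic above can be sketched in a few lines. This is a simplified illustration with made-up dates (real immutability implementations extend the lock in block generations, which this ignores): a restore point written on a given date cannot be touched until the immutability period has elapsed.

```python
# Simplified sketch of the 90-day immutability window described above.
# Real object-lock behavior is more involved; this only shows the idea.
from datetime import date, timedelta

IMMUTABILITY_DAYS = 90  # the 30- or 90-day settings mentioned in the post

def is_still_immutable(written_on: date, today: date) -> bool:
    """True while the restore point is inside its immutability window."""
    return today < written_on + timedelta(days=IMMUTABILITY_DAYS)

print(is_still_immutable(date(2023, 1, 1), date(2023, 2, 1)))  # True: inside 90 days
print(is_still_immutable(date(2023, 1, 1), date(2023, 6, 1)))  # False: window expired
```

This also shows why immutability and GFS archival retention answer different questions: the lock protects the recovery window, while GFS decides how long the data exists at all.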
You can still update the license key if you need to… I just ran into this internally. We no longer have an Enterprise Manager, but the key we had was managed by the EM. There was a button I could select to still supply the license key as I was changing over to VULs from perpetual licensing. I still need to open a ticket with support to get the query to remove the EM entry from our deployment. But that's the answer…
I'm pretty constantly revising my own plans for backup architecture. Moving away from Synology NASes has been helpful, but I was unfortunately down that track for about 3 or 4 years because that was always how we did it; we finally convinced people to start using purpose-built Dell servers with local storage. That said, getting folks to put the VBR backup server at the recovery site instead of the primary site has been an issue in some cases, but that's getting easier. Using ReFS was usually not an issue, but I have a coworker who has had issues with ReFS in the past and is very afraid of it, and I've run into it as well with Microsoft's various patches that cause ReFS volumes to show as RAW, etc.

Now I'm starting down the road of the correct architecture for Linux native immutable backups. The plan I put down for a client was that at the primary site there is a purpose-built Dell server with local storage. Great. The NAS that they are currently using for primary storage (wi
I have old numbers from my VMCE certification notes that I put into some training materials for when I was training my team, and that presentation is from January 2021, so I would think the numbers have to be much higher by now. This reminds me... I need to actually get around to taking the VMCE exam… VMCE: 858, VMCA: 77
I can't really speak for backups, but I would assume it would follow what I do with the actual mailbox. Many of my clients just delete the mailboxes, so I wouldn't be concerned with the backups sticking around. If it's something they need to keep for archiving or future access by others/replacements, then I convert the mailbox to Shared, remove the license, and assign permissions for whoever needs access, if anyone; of course, I would still want the shared mailboxes backed up. It's really a case-by-case call on the business need to retain that data, or not.
Hold up… so are we saying that all VMs on a licensed socket are covered for agents? If so, I learned something today. I knew about the "up to" 6 free agent licenses with perpetual licensing, but wasn't aware of all VMs on a licensed socket being covered for agents. @JMeixner @regnor

"This behavior changed with v10 or v11, I think. Before that you would have had to license an agent if it's used in a virtual machine. The disadvantage, on the other hand, is that you would have to license a whole ESXi host with socket licensing even if there's only a single VM which you want to back up via agent."

Okay, fair enough. I came into Veeam 5 years ago with 9.5 U4, but didn't spend a ton of time with it and got a lot more involved around the release of v10.
"Aha, OK, 6 is the maximum. Too small for my clusters; I have the maximum in each cluster 😂😂😂"

Most of my customers are in the SMB space, so many only have 2-3 hosts in a cluster and use VMware Essentials kits and Veeam Backup Essentials in most cases, so there is a 6-socket max there. Most are licensed perpetual, but we're converting some, obviously selling VULs new, etc. We do have two clients on VCSP, but rental licensing is more of a one-off for us: used for those few select folks that don't want to buy or only needed a couple of VMs protected (VBE packs being reduced to 5 workloads has reduced that need even more), or for us to use in a pinch when we need a solution right away and the client hasn't purchased yet.
"It is another thing if you want to use an agent on a hardware server; then an instance is used. The socket-based licenses have 6 free instances included. So you can use 6 server agents, or 18 workstation agents."

I believe that's assuming you have 6 sockets licensed. If you only have 4 sockets licensed, then it's up to 4 free instances, correct?
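For what it's worth, the instance math quoted above can be sketched like this. The only figure taken from the thread is the 1 server instance ≈ 3 workstation agents ratio (6 instances covering 6 servers or 18 workstations); whether the free instance count actually scales down with licensed sockets (4 sockets giving 4 instances) is exactly the open question here, so treat that part as an assumption, not a licensing fact:

```python
# Sketch of the free-instance math discussed above. The 1:3 server-to-
# workstation ratio comes from the quoted post (6 instances -> 6 server
# agents or 18 workstation agents). Whether instances scale with sockets
# is an ASSUMPTION pending confirmation from Veeam's licensing docs.
WORKSTATIONS_PER_SERVER_INSTANCE = 3

def workstation_capacity(free_instances: int) -> int:
    """Workstation agents coverable if all free instances go to workstations."""
    return free_instances * WORKSTATIONS_PER_SERVER_INSTANCE

print(workstation_capacity(6))  # 18, matching the numbers quoted above
print(workstation_capacity(4))  # 12, if 4 sockets really do mean 4 instances
```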
iSCSI is the way (unless you can use local disks): better performance than SMB and NFS, I believe mostly due to multipathing. NFS would be preferred over SMB if you had to use a network protocol. That said, how is your volume formatted? ReFS has been known to occasionally cause issues on iSCSI volumes to NASes, which can show up in the health checks at the end of the backups, or so I've read per @Gostev. With that said, I haven't personally experienced it (to my knowledge... some of that is semi-transparent), but going forward, any NAS repos I have will be using NTFS and not ReFS. I might possibly try XFS with a Linux repo server, but I haven't done the research and tried it out yet.

Edit: Just read your comment above and noted you're trying to disable caching on the drives. To my knowledge, I haven't seen that issue, and I have several Synology and QNAP NASes in place across my client base. Not to say it wouldn't happen, but if you need me to check on any of my Synologys, I certainly
Yes, I use direct storage access now, though I haven't set one up specifically on Nimble yet; I have a client I plan on enabling that for down the road. Note that what I'm using accesses the VMDKs via iSCSI after the snapshot of the VM has been taken. There is also deeper integration available where you back up from the storage snapshot that the array takes of the datastores, but I haven't been doing that.

With that said, one of my clients ran into an issue recently where I believe Veeam was initiating VM snapshots at roughly the same time that the array was taking storage/volume snapshots, and it caused some sort of issue. I don't know all of the exact details and the resolution, as I'm relatively hands-off in that environment, but I know they bounced ideas off of me. I'd certainly recommend reading the best practices from both Veeam and Nimble on this integration. I seem to remember it being something about when presenting the datastores to the Veeam host for direct
"Maybe I'm wrong, but with a Quick Migration isn't the time halved compared to a backup copy job plus restore? It's an interesting topic @dloseke :)"

That's an interesting thought. I guess I haven't used Quick Migration enough to tell if there would be a difference between that and a copy job. At least with the copy job there should be no restore, as it can just use the backup repo for seed data. I actually do that quite often if I have copy jobs targeting the same location as replicas. That said, I suspect the WAN link or whatever is going to be the major contributing factor. I'm not sure what kind of compression is available with Quick Migration. Perhaps something interesting to test out… you know… in my "free time". And by free time, I mean when I should probably be sleeping. :-)
This is a good idea. My concern is that if it takes a LONG time to copy across whatever link is there, then a Quick Migration may end up with a large snapshot to commit at the end when it's done (assuming the source VM isn't deleted, which I think would be the case since the OP is trying to create a replica). With a backup copy job, it may take a long time for the copy to complete, but at least the source VM would be unaffected. Physically transporting a copy is going to be more labor-intensive, but probably faster.
"(Google Translation) I am migrating some virtual servers from one location to another, about 4 TB each, and it takes a long time to transmit. Can I take the full from the place of origin to the destination place and then execute the incrementals? Is there a procedure to do it?"

Apologies, as my Spanish is minimal. I would think that you would be able to have a repository at whatever location you're targeting, and then you should be able to create a backup copy job to that location and use that backup copy as a seed. If that is not an option and you need to physically transport the data, I would take a full backup of the VM in question, physically move the VBK to the repository at the target location, and then rescan the repository. Once you have the VM showing as a restore point in the repository at the target location, I would think you should be able to use that as seed data for the replica. Let me know if that works for you, as I haven't tried this, but I'm not sure tha
@Rick Vanover I posted a quick update. After getting the I/O filter versions in sync between the sites, I had to blow away my initial sync and start over, but it is now working. Now it's just a matter of fine-tuning the RPO policy and when I get alerts. I got a LOT of failed and success emails the first day or two because I'm attempting a 15-second RPO and it's having a hard time keeping up at times, so I may have to back it off a bit. I did end up changing my warning and failure alerting periods to 2 minutes and 5 minutes respectively, so my inbox doesn't get quite so blown up.
Final update… I have it working now. It turns out that the errors I was getting after resolving the above had to do with the VM not being attached to its disks. For instance, the VMX was pointing to SERVER1-interim.vmdk, but I don't believe those disks existed for some reason, or they had a different name (I don't recall exactly what happened). When I tried to edit the VM to remove the disks and then reconnect them, it wouldn't connect. In the end, I blew away the VM, deleted the disks and remaining files from the datastore, and then reseeded/replicated the VM from scratch, and it began working normally. Now it's just a matter of fine-tuning the RPO policy and when I get alerts. I got a LOT of failed and success emails the first day or two because I'm attempting a 15-second RPO and it's having a hard time keeping up at times, so I may have to back it off a bit. Which is fine... with snapshot replication, we had these at every 4 hours before implementing CDP, so a 1 minute or 5 minute