Replies posted by bp4JC
I assume that you are presenting the storage to the hosts that run the Veeam VM over SAN, iSCSI, NFS, or similar, rather than as local disk on the host. Why don't you present that storage, as CIFS for example, as repositories, and then create a SOBR with all of them? Depending on the maximum capacity of the storage you are using to present the repositories, you will have to size them to be compatible, for example, five CIFS shares of 20TB each, or NFS repositories of 20TB each. Hope this helps.

It's set as a VMDK currently. I was not aware of that. I think we might end up going the route that @MicoolPaul suggested; I had the same idea and my boss liked it. What you're saying would probably work as well. The direct iSCSI connection would be the easiest, I think.
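The sizing idea in the reply above (carving one large capacity into equally sized extents for a SOBR) comes down to simple arithmetic. As a rough illustration only, not a Veeam tool, and using the example figures from the reply (20TB shares), a sketch might look like this:

```python
import math

def split_into_repos(total_tb: float, max_repo_tb: float) -> list:
    """Split a total capacity into the smallest number of equally
    sized repository extents, each no larger than max_repo_tb."""
    count = math.ceil(total_tb / max_repo_tb)
    return [round(total_tb / count, 2)] * count

# e.g. 100TB presented as 20TB shares, per the reply's example
print(split_into_repos(100, 20))
```

The same arithmetic applies whether the extents are CIFS shares or NFS repositories; the point is simply that each extent stays under the maximum you can present.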
Thanks for the extra detail @bp4JC. Aside from the obvious point that I really wouldn't recommend Veeam having a repo on a VMDK, you could create a backup job per VMDK, as multiple jobs; that would work. You'd just use an exclusion to process the VM but only one disk, for example. Depending on how many disks there are and their size, you could maybe set up a job that processes all disks apart from your huge VMDK, then a separate job that only processes that disk. As you've said, it's an imaging server, so it's not like you need application-aware processing for MSSQL recovery or anything else that requires a different solution. Is your VMDK backed by a SAN? If so, can you just present an iSCSI LUN directly to the Veeam repo instead?

This is exactly what I was thinking: just setting it up using the initiator in the OS. I think that is going to be the best course of action because this server is only going to get bigger. It is SAN-backed.
Another update for you all. More specifically, this server is spread over 3 different datastores. We have the Veeam repository on a VMDK, and the maximum size is 65TB. We need to be able to potentially split this backup over multiple repositories, or something similar. That being the case, would SOBR be the way to go?
Hi, even if you're using per-VM backups, each backup must sit entirely in one repo, so SOBR won't help there. But if your problem is the datastore running out of space due to snapshots, you could configure a different VMware datastore to host your snapshot for the duration of the job processing. Does that help? 🙂

If I am interpreting the email from my boss correctly, we are just outright running out of space on the datastore, and we can't make a datastore bigger than 65TB in VMware.
It's a VMDK file. The backup runs fine up until this point, and then it just fails. One of the hard disks also shows as errored in the job, but it doesn't say what the error is. Also of note, the errored hard disk does not contain this file. When you look at the failure cause, this file being 'missing' is the reason it lists. I can go to vCenter, to the excluded drive, and the file is there. It's very strange. I just ran a rescan of the host in Veeam; I'll see if that makes a difference.
Are we talking about agent or VM backups? For agent backups I would certainly create an active full backup, as I wouldn't trust CBT anymore. For VM backups an incremental run should also work, as CBT is independent of the OS. The incremental will surely be much bigger than usual, but that's all.

It's a VM backup.
Thanks, guys. I was thinking that was the case. I ran into a situation where I had fairly large incrementals, and when I looked at the way the backup was set up, "Entire Computer" was chosen as the backup option. Based on a similar situation I saw a while back, I figured it was backing up every listed volume, including the VBR repository volume. I changed over to volume level. Thanks again!
Hey! Without seeing the specifics of the deployment, and whether or not it's as simple as excluding the specific controller that has the disk, could you use the Veeam Agent if it's some sort of direct passthrough of the device? That way it can be handled from within the OS.

Veeam Agent is not a bad idea. Can you use that in conjunction with vCenter backups in the same VBR installation on a backup server? (I hope that makes sense... basically, have an agent backup alongside the other vCenter backups within VBR on the backup server.)
Hello @bp4JC, have you seen this yet? https://helpcenter.veeam.com/docs/backup/vsphere/backup_job_excludes_vm.html?ver=110

I have. In that exclusions list, I am able to see the 4 datastores we have created in VMware. It's the USB drive that is not showing up. I can tell that it's being backed up due to the size of the incrementals that are occurring, but therein lies the issue: we really need to be able to exclude this drive. The data on it is temporary and gets deleted regularly by the application it is assigned to, but consequently, we are losing all of our disk space due to that drive.
Having your backups offsite is key in this scenario. A backup of your configuration file would allow you to get VBR back up and going in the event that you have to recreate your backup server. Once you have VBR back up, you should be able to “re-map\reseed” your backups into the jobs, in order to maintain your backup chain.
Over on the R&D forums, my username is Spooky Door. I stole this from a card game I played once. I can’t remember what it was called, but it was a goofy horror themed card game where you try and collect a certain set of cards and you win. One of the cards is called “The Spooky Door” and that really appealed to my sense of humor. bp4JC is a reference to my faith as a Christian. bp are my initials, and JC are the initials for Jesus Christ.
I opened the firewall on the Hyper-V host side and then tried all the same steps as before. I still receive the same error. Referring to Marcofabbri's reply: could my password with special characters really be the reason I am unable to connect to this server? I avoided using the previous two linked accounts because I cannot confirm whether they are secure enough.

Can you verify that you are using the Domain\Username format in the credentials? This may seem weird, but try something for me: type the domain password you are using into Notepad++ on the local host/server/hypervisor, copy it, paste it into the credentials setup in Veeam on the backup server, and then save the credentials. Weirdly, sometimes copying and pasting is the best way to deal with passwords in VMs. Caps Lock may be on, or something strange may be happening behind the scenes. It's a best practice taught to me by an IT veteran. It sounds dumb, but give it a shot.
And for your question regarding tape: Veeam supports LTO only. Speed depends on the LTO generation used. LTO-6 has a speed of 150MB/sec; LTO-9 has a speed of 400MB/sec. And that is per tape drive. Your primary storage has to be fast enough to read the data at this speed to utilize several drives.

What kind of hardware specs/resources do you typically implement when you are setting up tape backup infrastructure?
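The drive speeds quoted above make for easy back-of-the-envelope sizing of a tape window. As an illustration only (using the reply's figures of 150MB/s for LTO-6 and 400MB/s for LTO-9, native speed, ignoring compression and load/seek overhead):

```python
# Native per-drive speeds quoted in the reply (MB/s)
LTO_SPEED_MBPS = {"LTO-6": 150, "LTO-9": 400}

def hours_to_stream(data_tb: float, generation: str, drives: int = 1) -> float:
    """Hours needed to stream data_tb terabytes to tape at the
    generation's native speed across the given number of drives."""
    megabytes = data_tb * 1_000_000  # decimal units: 1 TB = 1,000,000 MB
    seconds = megabytes / (LTO_SPEED_MBPS[generation] * drives)
    return round(seconds / 3600, 1)

print(hours_to_stream(10, "LTO-6"))            # one LTO-6 drive
print(hours_to_stream(10, "LTO-9", drives=2))  # two LTO-9 drives
```

This is also why the reply stresses primary storage speed: with two LTO-9 drives, the source has to sustain 800MB/s of reads or the drives sit idle.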
Regarding the original question @bp4JC was asking: doesn't a backup copy job always copy only the latest backup state in the very first run, and thus always start with an active full? A single new VBK is produced. https://helpcenter.veeam.com/docs/backup/vsphere/backup_copying_process.html?ver=110 From my knowledge it's the other way around: it's not easy to have it copy the full chain with all the history. Here we would need some folder operations...

That's what I was thinking: that if I take a new active full on the source job and then a new active full on the copy job, any pending restore points that have not been copied over will just stay local and not get copied over.

@marcofabbri I am holding off on the folder rename at the moment. I was talking to my CTO about all of this; he took a look at our Cloud Connect server and is noticing some network bandwidth issues that he wants to address. I am going to hold off while we explore that. It's a great suggestion, though.
Those sound like fantastic changes and should help significantly. You want to throw all the resources at your Veeam server that you can. Something that occurs to me: what version of VBR (Veeam Backup & Replication) do you have installed? Version 11 is known to have some performance issues. If you run into an issue again, it might be worth uninstalling v11 and installing v10a in its place, or even moving to v11a. I don't know how feasible this is, but it's an idea if you are using v11.
If I were to create a new copy job to replace an old one, and I want it to only move over the very latest restore point and none of the previous restore points currently in the repository, can this be accomplished by taking a new active full on the source jobs and then running a "sync now" on the new copy job, telling it to copy over the latest? If not, can this only be accomplished by wiping out everything, local and cloud, and starting from scratch? [Edited]

If you run an active full, it should copy the older ones anyway. I think you can rename the source folder (so no wipe needed), launch a full, and then run the copy job.

That's not a bad idea; I didn't consider a folder rename. By doing that, what happens on the Veeam side of things? Does Veeam see that chain tied to that folder, so if I rename it, it will fail because it can't find a folder by that name, but by running a new full, it won't matter? That now makes me think: what if I did "remove from configuration" on the current backup?
I have updated one of my biggest environments. It works really great. Waiting for the first health checks to run; they should take significantly less time to complete than before.

This is also what I'm waiting for. In some environments the health checks take too much time and cause other jobs to wait or fail; we should see some improvement there.

Health checks, merges, and compact operations were taking significant amounts of time (days) on some of my servers and causing issues with my copy jobs. That was one of my main reasons for moving forward with the upgrade.
I am excited about Microsoft 365 backup. That is going to be a fantastic service to have available as we continue to migrate customers to Veeam. I was also excited to learn about the Disaster Recovery Orchestrator and the Veeam lab; I want to leverage that.

It's a shame that Disaster Recovery Orchestrator isn't compatible with Cloud Connect, as I see huge potential in partnering service providers, who see and face these problems daily, being able to help tailor the customer's DR experience to what they truly need.

That's a good point. That should definitely be brought up on the R&D forum. I think that would be something they'd be interested in adding. I am really hoping for a "force retention" or "run retention at the beginning of the backup" function to be added to VBR. That would be a lifesaver if you ever run out of space due to a miscalculation or a retention issue.