Replies posted by dloseke
If you have multiple backups that run locally, whether subsequent incrementals or even subsequent fulls, the next time the copy job to object runs it’s going to grab whatever data it needs from the local repo to build the data set specified by the retention policy on the object repo. As I recall, the local restore points won’t be deleted even if local retention says they should be, because the copy job holds onto them until that data has been copied. I’m a little rough on this concept, but I believe that’s the case.

The question I typically have, though, is why you’re copying periodically and only once a week. In most cases I’d want the latest data available in my object repo as soon as possible. Unless you have a very high change rate, your incremental changes are probably going to be relatively small if copy time and bandwidth constraints are the concern. I always feel like putting off to the end of the week or weekly or whatever are gene
I should also note that there were some questions in either /r/sysadmin or /r/msp a while back (I think it was /r/msp) about what people plan on using if the whole Broadcom acquisition of VMware causes VMware to tank, and Proxmox was certainly on the list along with a few other options. Proxmox isn’t fully mature yet, but it certainly sounds like it has potential.
If you spend any time in the r/homelab subreddit, you’ll find Proxmox mentioned a lot. I haven’t played with it, but it sounds like it’s getting closer and closer to a good business-grade solution, as I’m told it now has clustering capabilities. With that said, I HATE HATE HATE the name. As superficial as that is, it makes me not want to play with it. There have also been a couple of requests for Proxmox support in Veeam, but the only option currently is to use the Veeam agents inside the VMs.
Backing up to S3-compatible object storage does not use traditional backup files and chains, and the concept of incrementals is somewhat foreign. Veeam keeps the blocks required to meet the retention period you’ve set, and assembles those blocks into the data needed if you perform a restore from object. If a block of data is no longer needed to satisfy the retention policy, and that block is not flagged for immutability, the block is deleted. If the block is flagged for immutability, it’ll be marked for deletion but cannot be removed until the immutable flag expires. This is what makes object storage so much more efficient than regular block storage - you don’t have to keep duplicate blocks of data across multiple full and incremental backup files.
If you need to compare sites, here is a good link, as I might think about moving in the new year - 10 Best Cheap WordPress Hosting Services (Nov 2023 Deals) (codeinwp.com) Excellent reference. It really came down to Bluehost and Dreamhost I think, and had I seen those load time differences, I may have gone Dreamhost instead. NameCheap, Hostinger and SiteGround were also in the running but were eliminated for….reasons, I guess. I don’t recall exactly what all of them were.
I will note that block size does matter. A couple of others and I did some testing a while back on how much space is consumed when backing up to Wasabi or other object storage with different block sizes. That WILL make a difference in how much space is consumed, but that’s not a change due to object vs. block storage, SOBR vs. direct-to-object, etc.; it just comes down to how large the blocks you’re using are. It sounds like that wasn’t the question, but I wanted to put it out there that block size does matter.
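For reference, the block size comes from the job’s storage optimization setting, and you can check it from the Veeam PowerShell module. I’m going from memory on the exact property names here, so treat this as a sketch to verify against your VBR version rather than gospel (the job name is just an example):

```powershell
Import-Module Veeam.Backup.PowerShell

# Storage optimization (block size) lives in the job's storage options.
# Property names are from memory -- verify against your VBR version.
$job = Get-VBRJob -Name "Copy to Wasabi"      # example job name
$job.Options.BackupStorageOptions.StgBlockSize
```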
If you want to get crazy……there are options.

Option 1: Set up the NAS as an iSCSI device and direct-connect from the VBR repo server to the NAS as an iSCSI volume. This would allow you to use the ReFS or XFS filesystems for block cloning (a rough sketch of the Windows side of that setup is below). I don’t like running ReFS on a NAS, however, because it’s likely a software-based array with no battery-backed cache, and that can result in ReFS filesystem corruption if some sort of issue such as a power blip were to occur where the server updates the file table stating a file has been written before the file is actually written, causing a mismatch. There have been conversations with Gostev in the past as well, and he doesn’t trust software-based RAID volumes because there can be performance tweaks that could introduce file corruption, errors, etc. I have a few folks using direct-attach with NAS’s, but it’s no longer my go-to option.

Option 2: Set up the NAS as a NAS using the SMB/CIFS or NFS protocols. You miss out on multipathing as available
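For what it’s worth, here’s roughly what the Option 1 setup looks like from the Windows/VBR side. The portal address, drive letter, and volume label are made up, and the 64K allocation unit size is just the commonly recommended value for Veeam repos, so adjust for your environment:

```powershell
# Connect the repo server to the NAS-presented iSCSI target
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50        # example NAS IP
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true      # reconnect after reboots

# Once the new disk shows up, initialize it and format as ReFS with 64K clusters
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -DriveLetter R |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'
```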
I’m hosting on Dreamhost on one of their shared starter plans. I don’t love it...it certainly sleeps or times out the web site pool at times, so hitting my site fresh after a stretch of no traffic means the initial load is slow, and then it’s responsive after that. But for what I paid, I guess it’s not too bad. I’m glad this conversation started as well….Dreamhost changed some stuff around, and in all the rush of things I didn’t read the email; I then realized my IP address had changed a few weeks ago and I needed to update my DNS since I’m not using Dreamhost’s DNS servers, so my A records weren’t updated automatically. It’s always DNS….
Can you let us know the region/language settings on your machine? I notice the screenshot seems to format the time as 21.51 instead of 21:51, and I’m wondering if there’s an issue with a particular locale on Windows. If so, Veeam support will likely already be aware, but you should raise a case with them otherwise. Keen eye there. I’m wondering if this could be causing an issue as well. Interesting that on his machine it’s formatted 21:51, but on the server it’s 21.51. Best I can tell, the Amazon S3 protocol requires time formats to be ISO 8601 compatible, and using a period to separate the hour and minute fields rather than a colon does not comply with ISO 8601. I suspect that if the time locale on the server is converted to a compatible format, this error may go away. The first link below references Ceph, but this line in particular gives me pause. I suspect support would have him change to a standard time format, at least for testing purposes, but I assume as well that even if it
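Just to illustrate the difference I’m describing (nothing Veeam-specific here, and the Finnish locale is only an example of one that typically uses a period as the time separator):

```powershell
$now = Get-Date

# Culture-sensitive formatting: some locales use a period as the time separator,
# e.g. Finnish typically renders 21:51 as 21.51
$fi = [System.Globalization.CultureInfo]::GetCultureInfo('fi-FI')
$now.ToString('T', $fi)

# ISO 8601 round-trip format always uses colons, regardless of the machine locale
Get-Date -Format o          # e.g. 2023-11-20T21:51:00.0000000-06:00
```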
Assuming WordPress hosting, I have a backup plugin that’s uploading my blog backups to my Wasabi bucket. My assumption is that I could in theory restore a backup to a new host and be mostly migrated without much issue, but I haven’t actually tested the restoration process.
Don’t feel bad about posting late. I have a blog post pending that I started typing on the flight back, but I was so tired I kept falling asleep at the keyboard and waking up to backspace out whatever keys I fell asleep on, and work has been relentless as I try to get back up to speed. And in 2 days I leave to take the family to Disney World, so expect it to be a bit quiet. Anyhow, the point is, we’re all busy, and I’m happy to see you enjoyed your first trip and were able to see so much. Great catching up with you again and looking forward to seeing you online and hopefully at the next VeeamON!
Great recap Chris. Hope you get to feeling better! Thanks Derek. Yeah, I don’t feel too bad, but it sucks testing positive for Covid. Hopefully it will go faster this time than when I first caught it. 😂 I’ve had it twice now that I’m aware of….the second time was much better than the first. It nearly killed my father-in-law the first time he had it, but the second time was more like a cold/flu for him. Hope the second time around goes well for you as well.
As others noted, you’re using the ReFS file system, which uses block cloning. In very simple/rudimentary terms: if you have a full backup and then create another full backup, you technically have two backup files that are very similar in size. Block cloning looks at the duplicate blocks in both files, and for the blocks that are the same, rather than keeping two copies of each, it keeps only one, and each file references the blocks they have in common. This causes the reported size to be far greater than the actual disk space consumed. If you were using the NTFS file system, the space reported would be accurate, because you would have multiple copies of the same blocks. Note that if you were ever to move these files off this disk to an NTFS partition/disk/volume, or use a tool that is not aware of block cloning, your data is going to swell as the duplicate blocks are created on the new volume. For this reason, if you wer
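If you want to see the effect on your own repo, a quick comparison like this shows it (the E: drive letter and E:\Backups path are just placeholders for your repo volume):

```powershell
# Sum of the logical file sizes, as Explorer/dir would report them
$logical = (Get-ChildItem 'E:\Backups' -Recurse -File |
            Measure-Object Length -Sum).Sum / 1GB

# Space actually consumed on the volume (capacity minus free space)
$vol      = Get-Volume -DriveLetter E
$consumed = ($vol.Size - $vol.SizeRemaining) / 1GB

"Logical size of backup files: {0:N1} GB" -f $logical
"Space actually consumed on E:  {0:N1} GB" -f $consumed
```

On a block-cloned ReFS repo with several fulls, the first number will typically be much larger than the second.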
PowerShell is probably where you’re going to want to go on this. PowerCLI can run jobs a little differently, I believe, if that’s what you’re looking for, and then, as noted above, once you have the command the way you want it, you should be able to set up some Task Scheduler events to kick off each command as needed (a rough sketch is below). The way you want to run backups is a bit niche, but if you think it’s something others would see value in, I can’t promise anything here, but you could post a feature request in the R&D Forums.
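As a rough sketch of what I mean, assuming the Veeam PowerShell module is installed on the box (the job name, script path, and schedule are all made up):

```powershell
# kick-job.ps1 -- start a specific Veeam job (runs synchronously by default)
Import-Module Veeam.Backup.PowerShell
$job = Get-VBRJob -Name "SQL-Nightly"          # example job name
Start-VBRJob -Job $job

# Register a Task Scheduler entry that runs the script nightly at 10 PM
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
            -Argument '-NoProfile -File C:\Scripts\kick-job.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 10pm
Register-ScheduledTask -TaskName 'Start SQL-Nightly' -Action $action -Trigger $trigger
```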