Replies posted by dloseke
As others noted, you’re using the ReFS file system, which supports block cloning. In very simple/rudimentary terms: if you take a full backup and then create another full backup, you technically have two backup files that are very similar in content. Block cloning looks at the duplicate blocks in both files, and for blocks that are identical, rather than keeping two copies, it stores a single block that both files reference. This causes the reported size of the files to be far greater than the disk space actually consumed. If you were using the NTFS file system, the reported space would be accurate, because you really would have multiple copies of the same blocks. Note that if you ever move these files off this disk to an NTFS partition/disk/volume, or use a copy tool that is not block-clone aware, your data is going to swell as the duplicate blocks are recreated on the new volume. For this reason, if you wer
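To see why the reported size outruns the consumed space, here’s a toy Python sketch of block-level deduplication. The 4-byte block size and the sample data are invented for the demo and are far smaller than the clusters ReFS actually clones:

```python
import hashlib

BLOCK_SIZE = 4  # tiny blocks for the demo; ReFS clones much larger clusters

def blocks(data: bytes):
    """Split a file's contents into fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Two "full backups" that are mostly identical.
full1 = b"AAAABBBBCCCCDDDD"
full2 = b"AAAABBBBCCCCEEEE"  # only the last block changed

# What the file sizes add up to (what reporting tools show).
reported = len(full1) + len(full2)

# With block cloning, identical blocks are stored only once.
unique = {hashlib.sha256(b).digest() for b in blocks(full1) + blocks(full2)}
actual = len(unique) * BLOCK_SIZE

print(reported, actual)  # 32 20 -- reported size vs. space actually consumed
```

Copy those two files to NTFS and you pay the `reported` number, not the `actual` one, which is exactly the swell described above.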
PowerShell is probably where you’re going to want to go on this. PowerCLI can run jobs a little differently, I believe, if that’s what you’re looking for, and as noted above, once you get your command the way you want it, you should be able to set up Task Scheduler events to kick off each command as needed. The way you want to run backups is a bit niche, but if you think it’s something others would see value in — no promises here — you could post a feature request in the R&D Forums.
The downside of making things too secure is that your users will actively work against you to make things easier. Sure, be secure, but if it’s too secure or too inconvenient, they will work around it to make their lives easier. Good security doesn’t get in the way of the end users, at least not too much.
I believe the 1-month GFS retention is going to be redundant, because you’re already keeping your incrementals for 30 days. GFS really shines when you’re keeping restore points for longer terms. For instance, my standard setup is to keep a month of incrementals, but then keep 11 monthlies, and then 1, 3 or 7 yearlies, etc. Note that retention, for the most part, is separate from immutability: retention is set in the backup or copy job, while immutability is set at the repository. For instance, if you want data to be immutable for 30 days and your retention policy is 60 days, then the data can be deleted starting at 30 days when the immutable flag expires, but should still stick around until the retention policy takes effect. Conversely, if you have a 30-day retention but your immutability is set for 60 days, the files cannot be deleted until that immutability flag expires; in that case the files will be marked for deletion but cannot actually be removed until the immutability window ends.
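The interaction between the two windows boils down to “whichever expires later wins.” A tiny Python sketch of that rule — the dates and day counts here are just example values:

```python
from datetime import date, timedelta

def earliest_deletion(created: date, retention_days: int, immutability_days: int) -> date:
    """A restore point can be removed only after BOTH windows have passed:
    retention (set on the job) and immutability (set on the repository)."""
    return created + timedelta(days=max(retention_days, immutability_days))

created = date(2024, 1, 1)

# 60-day retention, 30-day immutability: retention governs.
print(earliest_deletion(created, 60, 30))  # 2024-03-01

# 30-day retention, 60-day immutability: the point is marked for deletion
# at day 30 but cannot actually be removed until day 60.
print(earliest_deletion(created, 30, 60))  # 2024-03-01
```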
If you’re not using ReFS (or XFS), VeeaMover is probably the best bet. That said, my experience is that it can be kind of slow, so it might be faster to perform a multi-threaded copy using RoboCopy or something like that. However, as noted, that’s not really recommended if you can avoid it. Note that if you move the data outside of Veeam, you’ll want to rescan the repository after the data has been migrated so that it can be cataloged into the database, and then you’ll likely want to repoint your backup jobs to the new repo so that you can hopefully reattach and continue the existing backup chains.
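For what it’s worth, the multi-threaded copy idea can be sketched in Python. This is only an illustration of what RoboCopy’s `/MT` switch does, not a replacement for it, and the directory layout below is made up for the demo:

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_threaded(src: Path, dst: Path, workers: int = 8) -> int:
    """Copy every file under src to dst using a thread pool,
    roughly what `robocopy /MT:8` does on Windows. Returns the file count."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path):
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy2 preserves timestamps

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))
    return len(files)

# Demo with throwaway directories instead of a real repository path.
src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp()) / "copy"
(src / "backups").mkdir()
(src / "backups" / "job1.vbk").write_bytes(b"x" * 1024)

copied = copy_tree_threaded(src, dst)
print(copied)  # 1
```

Note a plain file copy like this (or RoboCopy) is exactly the kind of tool that is not block-clone aware, so on ReFS the data will rehydrate to its full reported size at the destination.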
Behind the scenes of VeeamON Resiliency Summit
Rickatron’s take on Emilee Tellez’s (@ertelle1) point of view: Emilee says her trick is to “do it live” every time. It makes it more natural vs. scripted. True story: for the session we are doing on the demos A-Z with the Veeam Data Platform, we didn’t practice together. It’s not organic if we over-practice and script! I’m more in Emilee’s camp. I like to read through the material a few times and make sure I know the points, but my presentations are rarely rehearsed. They do tend to be more authentic that way, and I like to have more of a conversation than a presentation. Either way, looking forward to this!
How is the Storage vMotion operation getting cancelled? I wonder if it has anything to do with the I/O filter on the hosts? Anyway, this is an interesting question/dilemma. Have you reached out to Support? I’m not exactly sure... they’re pretty sizable, so they take a while to move, so I step away, and a little later when I check in on them I see the below. A bit more digging actually indicates there was an error and not a cancellation of the task. I’ll give Chris’ advice a shot.
I believe you need to recreate the CDP policy, and I think you can seed things. I used this once for my book, so I may be off on some steps. I can take a look after and see, but I removed the I/O filters to update to 8U2. 😂 I’ll give this a try. I know I have I/O filters that need to be updated on the source hosts, but I’ve been putting that off. I have updates and such needed all around the environment, so I’ll have to bite the bullet at some point. I was just trying to get the new SAN installed first, and then get the VMware and Veeam updates (if any) done afterwards.
How to fix Veeam FLR error: Secondary GPT header LBA 209715199 exceeds the size of the disk (86401630720)
Thanks for sharing. I believe I have a server that has this issue, but we had to pause backups, as these were going direct to Wasabi and the client had a poor internet connection, so I had to pause troubleshooting. In my case, I am attempting to get a good backup (vs. restoring), and I found information that referenced shrinking the disk, which I had tried, but I’ll try expanding instead and see what happens. I’m just picking back up on that project and was catching up on posts and found this, which I think may apply. Either way, thanks for posting.
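The numbers in that error message actually tell the story: the secondary GPT header must live on the last sector of the disk, but here it points past the end of the disk, which is why expanding (rather than shrinking) can satisfy it. A quick sanity check in Python, assuming 512-byte logical sectors:

```python
SECTOR = 512  # bytes; GPT LBAs are in units of the disk's logical sector size

disk_bytes = 86_401_630_720    # disk size from the error message
gpt_header_lba = 209_715_199   # secondary GPT header LBA from the error

# The secondary GPT header must sit on the disk's last sector.
last_lba = disk_bytes // SECTOR - 1
print(last_lba)                # 168753184 -- smaller than 209715199, so the
                               # header the GPT expects is past the disk's end

# The GPT was written for a disk whose last sector is LBA 209715199,
# i.e. a disk of this many bytes:
expected_bytes = (gpt_header_lba + 1) * SECTOR
print(expected_bytes)          # 107374182400
print(expected_bytes / 2**30)  # 100.0 -- exactly 100 GiB
```

So the partition table was written for a 100 GiB disk, but the disk it now sits on is only about 80 GiB; growing the disk back to at least 100 GiB puts the secondary header back within bounds.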
Veeam 100 Show - First episode - How to leverage existing Veeam object storage backups for Disaster Recovery
This was a great show, but it froze towards the end for me. Thanks for posting this; I will rewatch it to see the demo. It froze for everyone, I believe, and on multiple platforms. I was watching on LinkedIn, flipped to YouTube, and got the same. Glad to see there is a demo at the end of the posted video.
On the flip side, you can still create a SOBR for your backup copy to offload to archive (not with Wasabi, of course, unless you add a capacity tier with AWS S3 or Azure Blob, but you get the idea). I appreciate this insight. Alas, we don’t use an archive tier, so it’s not an issue for us, but good to note for anyone who may find this post in the future. We went with a copy job and haven’t looked back.
Also, don’t forget to check the “Don’t show the results on the boards” checkbox. Sounds like a great way to publish weak websites if you ask me. Used that when I tested my blog. Got a B across the board, so I need to look at Cloudflare, which I use with it. I just ran it on mine... aside from figuring out why my domain doesn’t like to pull up without a www. in front (it’s DNS, of course), I’m happy with my result.
I use the Qualys scanner every time I update a certificate on a public site. Another tool that I use in conjunction with this is IISCrypto, to disable/enable the appropriate SSL/TLS protocols, weed out weak ciphers, and set cipher priorities without having to dig into the registry manually. https://www.nartac.com/Products/IISCrypto
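IISCrypto does its work by flipping the Schannel registry keys on Windows. The same policy idea — refusing legacy protocol versions at the server — can be illustrated cross-platform with Python’s `ssl` module (purely as an analogy; this is not how IIS itself is configured):

```python
import ssl

# Build a server-side TLS context that refuses legacy protocol versions,
# the same kind of policy IISCrypto applies via the Schannel registry keys.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # drop SSLv3 / TLS 1.0 / TLS 1.1

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Handshakes offering anything older than TLS 1.2 would simply fail against this context, which is the server-side behavior the Qualys scan rewards.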
How to fix Veeam unable to allocate processing resources issues. Error: unable to find Hyper-V hosts where VM 'xxxx-xxxx-xxxx-xxxx-xxxx' is registered
Back when I worked in hosting, it was fun how the Linux team would poke at us Windows folks about rebooting. They talked about how stable their OS was, and stated that if they had issues on their side, it was typically due to poor coding in the application running on their box. And to be fair, it probably was, but I think the same is true, perhaps to a lesser degree, for Windows as well. Windows has gotten very stable overall, but there’s always something in there mucking things up, and more often than not, it feels like an app that isn’t cleaning up after itself, has memory leaks, etc.
Yeah, I’m trying to sort this out as well... gotta do some more reading on it quick, but I just saw the partner communication as well. We need a Slack for the entire Veeam 100 or something so that we can all be a part of this... or else the Legends will have to rise up and deploy their own!