OK, time to let ChatGPT help out and tell me about reasons why the df command reports higher used capacity than xfs_bmap reports when summing the unique data of all files. (Hard lesson: ask ChatGPT for help MUCH earlier… :)) Here are some answers that were quite interesting. Please note, this is direct ChatGPT output, so be careful about taking it for truth, but I tend to think I learned some things and got some direction for the next step. I also asked it to write me some scripts for xfs_bmap (a first sketch of that is below). I suggest anyone try the same and see what we come up with.

======

Metadata overhead: The df command reports the total used space, including both data blocks and metadata blocks, whereas xfs_bmap only reports on data blocks. This means that the df command will show a higher used capacity because it is including the overhead from metadata blocks.

Unwritten extents: XFS uses a technique called "delayed allocation" to optimize disk performance. This means that when a file is written, XFS can postpone allocating its blocks until the data is flushed to disk; space reserved this way is already counted by df but may not yet show up as mapped extents in xfs_bmap.
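A first stab at such a script could look like this. It's a minimal sketch, not the exact output ChatGPT gave me: /mnt/backup is an assumed mount point, it needs xfsprogs installed, and root to read all files:

```bash
#!/usr/bin/env bash
# Sum the mapped data extents of every regular file on an XFS filesystem and
# compare the total with what df reports as used.
# Sketch only: /mnt/backup is an assumption, adjust to your repository.

MOUNT=/mnt/backup

total=0
while IFS= read -r -d '' f; do
    # xfs_bmap prints one extent per line: "N: [startBB..endBB]: blocks|hole"
    # Offsets are in 512-byte basic blocks; holes occupy no disk space.
    while read -r _ range rest; do
        [[ $range == \[* ]] || continue            # skip "no extents" lines
        [[ $rest == hole* ]] && continue
        range=${range#[}; range=${range%]:}        # strip "[" and "]:"
        total=$(( total + (${range##*..} - ${range%%..*} + 1) * 512 ))
    done < <(xfs_bmap "$f" | tail -n +2)           # skip the filename header
done < <(find "$MOUNT" -xdev -type f -print0)

echo "Sum of mapped extents: $total bytes"
echo "df says used:          $(df -B1 --output=used "$MOUNT" | tail -n 1 | tr -d ' ') bytes"
```

Note that this counts each file independently, so reflinked (shared) extents get counted once per file, which is another possible source of mismatch against df.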
I'm looking into this as well. I'm not able to make the "used" output of "df -h" match the output of the script that uses xfs_bmap. It doesn't even come close in some cases; the df -h output is always higher. We currently suspect xfs_bmap does not include some space XFS pre-allocates for the backup files, whereas df -h probably does include it. The next step is to figure out how to get the two to match, to better understand what's going on.
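One way to pinpoint where the difference comes from could be a per-file comparison of what stat reports as allocated (which is what du, and ultimately df, are based on) against what xfs_bmap maps. A minimal sketch along those lines, again assuming a hypothetical /mnt/backup mount point:

```bash
#!/usr/bin/env bash
# Compare, per file, the space the kernel accounts as allocated (st_blocks,
# the basis for du/df) with the extents xfs_bmap maps. Files where the two
# disagree are where the df vs. xfs_bmap gap comes from.
# Sketch only: /mnt/backup is an assumed mount point.

MOUNT=/mnt/backup

find "$MOUNT" -xdev -type f -print0 | while IFS= read -r -d '' f; do
    alloc=$(( $(stat -c '%b' "$f") * 512 ))    # allocated 512-byte blocks
    mapped=0
    while read -r _ range rest; do
        [[ $range == \[* ]] || continue        # skip "no extents" lines
        [[ $rest == hole* ]] && continue       # holes occupy no disk space
        range=${range#[}; range=${range%]:}
        mapped=$(( mapped + (${range##*..} - ${range%%..*} + 1) * 512 ))
    done < <(xfs_bmap "$f" | tail -n +2)
    (( alloc != mapped )) && printf '%14d alloc %14d mapped  %s\n' \
        "$alloc" "$mapped" "$f"
done
```

Files where the two numbers differ would be the ones to inspect with xfs_bmap -v, which also shows per-extent flags (see xfs_bmap(8)).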
Very nice! I was looking for the "why?" when comparing it to PowerShell, but you explained that on your GitHub page, I see. Makes sense.
IMHO, must-haves for the new calc:
- Chain breakdown/visualization. This just makes the output way more understandable and, to some extent, verifiable.
- Some way to export a calculation. The URL export was very handy in RPS/RPC.

Nice to have:
- Immutable backup implications visualized, and enforced when checked.
The Apollos are not deployed yet; we are currently installing them with RHEL 8.2. It will take a couple of days until I'll be able to test. Just curious: why did you choose RHEL 8.2 in your case, instead of (e.g.) Ubuntu?
Yes, I agree. That's why I'd argue 7 days of immutable storage as a standalone measure is not that valuable (due to the mentioned "wait" time), although better than not having anything immutable. But the same could be said about having 31 days immutable as a standalone measure: it increases the chance of beating "wait times", but then there are other concerns when you have to restore from data that old after a compromise. I'd definitely agree there's no answer like "28 days is best in most cases". It depends on a lot. But finding the correct questions to ask, to see which period fits best, is an interesting part of the discussion.
Great answers! Great points. There are a lot of things to care about regarding protection in the data-compromise scenario, but I'm still struggling to come up with proper immutable retention times backed by proper arguments. For example, when I try to look up some consensus on average compromise detection times, that could lead to arguments for at least 14-31 days of immutable retention.

For example: let's say an organization adds a Veeam immutable hardened repository to its existing Veeam environment, with no other plans beyond that, and no such thing in this environment before. Sizing comes up. What sizing procedure would make sense, and what arguments would make sense for suggesting an immutable retention period, caring only about this immutable repository? Would 7 days be too short according to the proper arguments? I think longer immutable retention times (as a standalone measure) to protect against compromise are less valuable than one might think: if the compromise is deliberate, an attacker can simply wait until the immutable window has passed.
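To make the sizing part concrete, here's the kind of back-of-envelope math I mean. Every number in it (100 TB source, 2:1 reduction, 5% daily change, 14 days) is a made-up assumption, not a recommendation:

```bash
#!/usr/bin/env bash
# Back-of-envelope sizing for a standalone immutable repository.
# All inputs below are illustrative assumptions; plug in your own.

SOURCE_TB=100        # protected front-end data, TB (assumption)
REDUCTION_PCT=50     # size after compression/dedupe, % of original (assumption)
CHANGE_PCT=5         # daily change rate, % (assumption)
IMMUTABLE_DAYS=14    # candidate immutable window (assumption)

# One full plus one increment per day kept for the immutable window. Note:
# on a hardened repository Veeam extends immutability so a whole backup
# chain stays immutable until its dependent increments expire, so treat
# this as a lower bound rather than an exact figure.
full=$(( SOURCE_TB * REDUCTION_PCT / 100 ))
incr=$(( SOURCE_TB * REDUCTION_PCT / 100 * CHANGE_PCT * IMMUTABLE_DAYS / 100 ))
echo "Full backup:               ~${full} TB"
echo "Increments, ${IMMUTABLE_DAYS} days:       ~${incr} TB"
echo "Immutable capacity needed: ~$(( full + incr )) TB"
```

The interesting discussion is then which IMMUTABLE_DAYS value the threat model actually justifies, given the detection-time and "wait time" arguments above.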