Hi Dima,

Definitely not any of the problems you’ve mentioned. The tapes are genuine Quantum ones, fresh out of the box. I have 8 tapes total in the pool, rotated so that each sees 1 weekly backup and 6 daily backups in an 8-week cycle. The backups have been running for about 14 months, so each tape has only seen 7 or 8 cycles. It’s exactly the same on a brand-new, unopened tape too. Cleaning is done at the end of each 8-week cycle, well before the drive requests it. The free space is reported correctly when the tapes are empty, so it’s nothing to do with extra partitions etc.

And it’s not human error either. The tapes are LTO-6, so the NATIVE capacity is 2.1TB.

If you’d read my original post, you’d have realised most of this. So despite the “Solved” icon at the top of the screen, this issue most definitely is NOT solved.
Hi @Matt C, the standard block size is 256 kilobytes, not kilobits… darn it… I’ll get there eventually. Still, even allowing for that, it’s nowhere near the amount of wastage I saw.

@BertrandFR - the number I have is 05731346.

I don’t expect a huge response, as I haven’t got a support contract, but appearing to waste over a terabyte of tape on a 1.1 terabyte backup (having all but eliminated block size and foobarred hardware-compression issues) would warrant a bit of a response.
Good news and bad news, folks…

The removal of the larger personal stuff (music, movies, digital photos) from the backup set has reduced it to around 300GB, so there’s much more space on the tape for the important stuff. The reduced backup on Friday night still didn’t report the amount of free space correctly, though, so whatever the issue is, it’s still present.

Bad news - Veeam have closed the support case I opened, without so much as a peep of assistance or enquiry, so it doesn’t look like they’re bothered about this one. I’m just glad I haven’t forked out loads of dosh, if this is typical support.

Back to the tape block size… since it’s kbits rather than kbytes as I originally assumed, the blocks are 8 times smaller than the size used in my calculations above, so there should be much less space wasted on that account. However, as shown above, even the larger (incorrect) block size is nowhere near enough to cause an error of this magnitude.

Thanks to everyone for their work on this one. I’m just a
Hi Joe,

Thanks for the correction. I’ve opened a support case. In the meantime, I’ve archived some of my personal stuff to an LTO-7 LTFS tape, which should free up some space for more important business stuff! The joys of being a one-person company!
According to the tape drive properties, the block size is 128K - seems sensible.

I’ve done some analysis on the files, and I have the following results. I have counted 121,613 files with a size of 128K or less. If I assume each of these takes a full 128K block, then it amounts to 13.5 GB of wasted space. For the large files (over 128K), I have counted 23,422. If I assume each of these might take an extra 128K block on top of its actual file size, to round up to a whole number of blocks on tape, I get another ~3GB of wasted space.

So, worst case, with each “small” file taking one whole block and an extra block added to each “large” file, there shouldn’t be any more than 20GB wasted. That’s less than 2% of the 1.1TB total, which is entirely acceptable.

So I don’t think it’s a block-size issue. Even though there are a lot of small files, the numbers just don’t add up - unless I’ve misinterpreted the 128K and it’s actually 128M or something.
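For anyone who wants to run the same estimate on their own file set, here’s a rough Python sketch of the calculation above. The 128 KiB block size and the assumption that each file is padded out to whole tape blocks are my working hypotheses, not anything Veeam documents, so treat the output as a worst-case bound only:

```python
import os
import tempfile

BLOCK = 128 * 1024  # assumed tape block size: 128 KiB

def tape_waste(root):
    """Worst-case bytes lost to block rounding under `root`, assuming
    every file is padded out to a whole number of BLOCK-sized blocks."""
    small = large = waste = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # unreadable entry: skip it
            if size <= BLOCK:
                small += 1
                waste += BLOCK - size      # whole file fits in one block
            else:
                large += 1
                waste += (-size) % BLOCK   # padding in the final block
    return small, large, waste

# Quick demo on a throwaway directory: one 1 KiB file, one 200 KiB file.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "a.bin"), "wb") as f:
        f.write(b"\0" * 1024)
    with open(os.path.join(tmp, "b.bin"), "wb") as f:
        f.write(b"\0" * (200 * 1024))
    small, large, waste = tape_waste(tmp)
    print(f"{small} small, {large} large, ~{waste / 2**30:.3f} GiB wasted")
```

Point `tape_waste()` at the root of the backup source instead of the demo directory to get numbers comparable to mine.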
That could be it, Joe. I noted, whilst poking around, that it looks like 128K blocks on the tape. I’ll have a look at the file sizes within the set tomorrow and see how it looks. One workaround would be a file-to-disk and then a disk-to-tape job. Hopefully it’d pack things in better…
There isn't one. It's a straight whatever-is-on-the-disk-to-tape backup. There are a couple of backups to a disk repository from my desktop and laptop via agents, but these are minuscule in comparison. All the important stuff is in SVN repositories, or on the server and used via a network share. The backup repository folder is included in the main D2T file set, so it should still be reported in the transfer size.
OK, the backup without using HW compression in the tape drive has just completed. Still no change: ~1.1TB transferred per the job log, and 212.8GB showing free on a 2500GB tape. I’ve lost over a TB of space somewhere…
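Spelling out the arithmetic behind that claim (using the figures as Veeam reports them, decimal GB, which is an assumption on my part):

```python
# Sanity check on the reported numbers, all in decimal GB.
tape_capacity = 2500.0   # tape capacity as reported by the library
transferred   = 1100.0   # ~1.1TB transferred, per the job log
free_reported = 212.8    # free space shown after the job finished

expected_free = tape_capacity - transferred
missing = expected_free - free_reported
print(f"expected free: {expected_free:.1f} GB, "
      f"unaccounted for: {missing:.1f} GB")
# → expected free: 1400.0 GB, unaccounted for: 1187.2 GB
```

So nearly 1.2TB of capacity is unaccounted for, well beyond anything block-size rounding could explain.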
Retention in the media pool is set to protect for 3 weeks, adding new media as required (which hasn’t happened yet - still on the original 8 tapes I set up).

The tape backup job is set to export the media set on completion on Thursday and Friday: that is, after the last daily backup in the week’s sequence (Thursday), and after the full backup done on Friday. The next media set starts with the daily backup on Saturday.

I have 4 pairs of tapes - a daily and a weekly in each pair - with 3 pairs protected and 1 “active”. I’ve verified (by looking in the catalogues) that all the daily backups are present on the daily tape, so they are appended one after the other. At the end of 4 weeks, the tapes in each pair are swapped around, so in the long 8-week cycle, the tape used for the week 1 full backup is then used for the week 5 dailies (and the week 1 daily tape for the week 5 full), to ensure that each tape sees roughly the same use in the long term.

Looking at the numbers for transfer speed and time, it does seem to be transferring 1.1TB to the tape drive.
Done some more poking (after a break spray-painting some stuff in the workshop), and the dailies aren’t overwritten - it’s just the way they appeared in the tree that confused me. Phew…

I’ve checked the media sets, and it’s working as I expect, given the settings. I’ve also looked at the numbers: the last full backup took 4:26:24, which at its reported average of 72MB/second is 1,150,848 MB, or 1.1TB, so the numbers make sense that it’s transferred 1.1TB to the tape drive.

Is this, perhaps, a case of the hardware compression on the drive going badly wrong? The data is a mix of personal and business stuff, and a lot of the personal stuff is digital copies of music and DVDs, so maybe the hardware compression algorithm isn’t coping too well?
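The duration-times-throughput cross-check above works out like this (the duration and average rate are just the figures from my job log):

```python
# Cross-check: does job duration x average rate match the reported 1.1 TB?
hours, minutes, seconds = 4, 26, 24   # reported job duration, 4:26:24
rate_mb_s = 72.0                      # reported average throughput, MB/s

duration_s = hours * 3600 + minutes * 60 + seconds
total_mb = duration_s * rate_mb_s
print(f"{total_mb:,.0f} MB transferred ≈ {total_mb / 1e6:.2f} TB")
# → 1,150,848 MB transferred ≈ 1.15 TB
```

That agrees with the ~1.1TB in the job log, so the data really did go to the drive; the question is where it went on the tape.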
I had it set to create a new media set at 12:00 on a Friday to close off the daily tape, and the backup job is set to export the set and eject the tape. The file set is showing up as expected in the tape properties, but one thing I’m not sure about is whether each daily shows up separately, or whether they’re all grouped together. I’ve a feeling that having the option set on both the job and the tape library is causing the dailies to be overwritten, so only the most recent is included. Last week’s daily tape