
Likely a noob question, but just want to be sure things aren’t broken.

I’ve got a setup with Veeam B&R Community Edition on a single Windows machine. I have full backups scheduled weekly, plus daily backups, going to a Quantum LTO-7 drive using LTO-6 media.

The console reports the total capacity of the tapes as 2.1TB. Allowing for the whole base 10 / base 2 argument, this is what I’d expect. 

The full backup (done last night) reports total transferred is around 1.1TB, but the tape is only reporting around 250GB free. I’m expecting around a TB free (2.1TB - 1.1TB). 

You can also see that the “daily” tapes are reporting 2.1TB free (again, this is expected - there’s not a lot written, maybe ~90MB per day, so not enough to make a difference to the free space reported).

Is there something amiss somewhere, or am I going to have to migrate to LTO-7 media a lot sooner than I thought?


Hello @Matt C 
Yes, it’s a bit strange. You could check which data is on your tape: right-click a tape, open its properties, and check whether the files on it correspond to your selection.

 

Also, are you exporting or ejecting your tape after the job ends? Depending on how you have configured your media set, the data could simply be appended to the tape if you have selected that option.

I had it set to create a new media set at 12:00 on a Friday to close off the daily tape, and the backup job is set to export the set and eject the tape.

The file set is showing up as expected in the tape properties, but one thing I’m not sure about is whether each daily shows up separately, or whether it groups them all together. 

I’ve a feeling that having the option set on both the job and the tape library is causing the dailies to be overwritten, so only the most recent is included.

Last week’s daily tape 

 


Done some more poking (after taking a break to spray paint some stuff in the workshop) and the dailies aren’t overwritten - it’s just the way it appeared in the tree that confused me. Phew…

I’ve checked the media sets, and it’s working as I expect, given the settings. 

I’ve also looked at the numbers: the last full backup took 4:26:24, which at its reported average of 72MB/second is 1,150,848 MB, or 1.1TB. So the numbers make sense that it transferred 1.1TB to the tape drive.
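For anyone who wants to sanity-check that arithmetic, here’s a quick sketch (the duration and average rate are the figures from the job log above):

```python
# Sanity check: does 4:26:24 at an average of 72 MB/s come to ~1.1 TB?
hours, minutes, seconds = 4, 26, 24
duration_s = hours * 3600 + minutes * 60 + seconds   # 15,984 seconds

rate_mb_per_s = 72
transferred_mb = duration_s * rate_mb_per_s

print(transferred_mb)              # 1150848 MB
print(transferred_mb / 1_000_000)  # ~1.15 TB (decimal)
```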

Is this, perhaps, a case of the hardware compression on the drive gone badly wrong? 

The data is a mix of personal and business stuff, and a lot of the personal stuff is digital copies of music and DVDs, so maybe the hardware compression algorithm isn’t working too well?

 



 

The data should already be compressed by the backup-to-disk job; the hardware compression of the drive cannot do much more…

 

This could be caused by different retention on disk and tape, especially when you are moving the tapes offsite.

But I don’t completely understand the problem. If the full backup job copied some additional files because of retention, that should be shown in the job overview.

I think it is best if you open a support call for this. Support can analyze the situation in depth.


Just throwing this out there. What do your Tape backup settings look like? Do you have the option to ‘Append to Tapes’ enabled?


Retention in the media pool is set to protect for 3 weeks, adding new media as required (not happened yet… still on the original 8 tapes I set up)

The tape backup job is set to export the media set on completion on Thursday and Friday, thus after the last daily backup in the week’s sequence (Thursday), and after the full backup done on a Friday. 

Next media set starts with the daily backup on Saturday.

I have 4 pairs of tapes, a daily and weekly in each pair, 3 pairs protected, and 1 “active”. I’ve verified (by looking in catalogues) that all the daily backups are present on the daily tape, so they are appended one after the other. At the end of 4 weeks, the tapes in each pair are swapped around, so in the long 8 week cycle, the tape used for week 1 full backup is then used for week 5 daily (and w1 daily used for w5 full) to ensure that each tape is used roughly the same amount in the long term. 

Looking at the numbers for transfer speed and time, it does seem to be transferring 1.1TB to the tape drive. Since the only thing I can think of that *might* muck things up is the HW compression in the tape drive, I’ve switched it off. I’ve cleared the most recent full backup, and kicked it off again without HW compression. Should rule it in or out. 69% through now.

 


OK, the backup without using HW compression in the tape drive has just completed.

Still no change.

~1.1TB transferred on the job log, 212.8GB showing free on a 2500GB tape. Lost over a TB of space somewhere...
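Putting those numbers side by side shows just how big the gap is - a quick sketch using the figures reported above (capacity and free space from the console, transferred size from the job log):

```python
# How much tape was actually consumed vs. what the job says it wrote?
capacity_gb = 2500        # tape capacity as reported by the console
free_gb = 212.8           # free space remaining after the full backup
transferred_gb = 1100     # ~1.1 TB reported transferred by the job

consumed_gb = capacity_gb - free_gb   # 2287.2 GB used on tape
ratio = consumed_gb / transferred_gb  # ~2.08x the reported transfer

print(consumed_gb, round(ratio, 2))
```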



Like I said above, Veeam data is already compressed. The tape drive’s hardware compression cannot compress it any further (or only very little).

 

You say that you have 4 weeks retention for your tape media pool. How long is the retention for the disk repository?


There isn't one. It's a straight whatever-is-on-the-disk to tape backup. 

There are a couple of backups to a disk repository from my desktop and laptop via agents, but these are minuscule in comparison. All the important stuff is in SVN repositories, or on the server and used via a network share. 

The backup repository folder is included in the main D2T file set, so should still report in the transfer size.


Ok, so it is a file-to-tape job.

I don't know the tape format Veeam uses in depth, but perhaps a fixed block size is used and each small file occupies one block? Then it would be possible that much more space is used on the tape than the amount shown on disk and in the job overview…


That could be it, Joe. I noted, whilst poking around, that it looks like 128K blocks on the tape. I'll have a look at file sizes within the set tomorrow, and see how it looks.

One workaround would be a file-to-disk and then a disk-to-tape job. Hopefully it'd pack in better...



This was why I asked about the retention of the disk repo - I believed you were doing a backup-to-tape job. With that you have a few big files, and the tape is used more efficiently.


According to the tape drive properties, the block size is 128K - Seems sensible.

Done some analysis on the files, and I have the following results:

I have counted 121613 files with a size of 128K or less. If I assume these all take a 128K block, then it amounts to 13.5 GB of wasted space.

For the large files (over 128K), I have counted 23422 files. If I assume each of these might take an extra 128K block in addition to what the actual file size is, to round up to a whole number of blocks on tape, I get another ~3GB of wasted space.

So, worst case, with each “small” file taking one block, and an extra block added on to “large” files, there shouldn’t be any more than 20GB wasted. That’s less than 2% of the 1.1TB total, which is entirely acceptable.

So, I don’t think it’s a block size issue. Even though there are a lot of small files, the numbers just don’t add up. Unless I’ve misinterpreted the 128K, and it’s actually 128M or something.
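For reference, the worst-case estimate above can be reproduced like this (assuming 128 KB blocks, every small file occupying a full block, and every large file wasting at most one partially filled block - the file counts are the ones from my analysis):

```python
# Worst-case estimate of tape space wasted by fixed 128 KB blocking.
BLOCK_BYTES = 128 * 1024   # assumed block size: 128 KB

small_files = 121_613      # files of 128 KB or less
large_files = 23_422       # files over 128 KB

# Upper bound: each small file wastes at most one whole block,
# each large file wastes at most one block of padding at the end.
max_waste_bytes = (small_files + large_files) * BLOCK_BYTES

print(max_waste_bytes / 1e9)   # ~19 GB - nowhere near the missing ~1 TB
```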

 


No, the blocksize unit is kb for LTO (the largest possible is 512 kb, if I remember correctly). In most cases Veeam sets it to 256 kb.

Ok, in your case there are neither that many files nor that many really small files. I have seen clients with millions of 200-byte files…

So - as said yesterday - I would open a support case for this.


Hi Joe,

Thanks for the correction.

I’ve opened a support case. In the meantime, I’ve archived some of my personal stuff to an LTO-7 LTFS tape, which should free up some space for more important business stuff!

The joys of being a one person company!


👍🏼 Please let us know the results.



Yes, you are right @JMeixner Veeam sets the block size to 256kb. 

https://helpcenter.veeam.com/docs/backup/vsphere/tape_supported_devices.html?ver=110#data-block-size


I use backup to tape, and when I append on my weekly jobs (doing per-VM backups) it takes the VBK files and puts them where it pleases, so I tend not to track this stuff - but it works and fills my tapes. It’s never been “wrong”.

 

Perhaps with file-to-tape I would be curious about block size as well, if you were backing up a ton of really small files. That being said, when I use file to tape to back up a 1.4TB share, it usually uses 1.4TB of my tapes. For most modern-day files, a larger block size is recommended for performance, given that we aren't using 1KB files for everything. 


Good news and bad news folks…

The removal of the larger personal stuff (music, movies, digital photos) from the backup set has reduced it to around 300GB, so there’s much more space on the tape for the important stuff. The reduced backup on Friday night is still not reporting the amount of free space correctly, so whatever the issue is, it’s still present.

Bad news - Veeam have closed the support case I opened without so much as a peep of assistance or enquiry, so it doesn’t look like they’re bothered about this one.

Just glad I haven’t forked out loads of dosh if this is typical support.

Back to the tape block size… since it’s kilobits rather than kilobytes, as I originally assumed, the blocks would be 8 times smaller than the size used in my calculations above, so there should be even less space wasted by blocking. However, as shown above, even the larger (incorrect) block size is nowhere near enough to cause an error of this magnitude. 

Thanks to everyone for their work on this one. I’m just a bit sad it’s almost in vain, as Veeam don’t seem to want to pick up the baton and run with it. 


Share your case number @Matt C - the Product Management team can probably do something about it. @Rick Vanover @Hannes 


Hi @Matt C ,
the standard block size is 256 kilobytes, not kilobits…



Darn it… I’ll get there eventually. Still, even allowing for that, it’s nowhere near the amount of wastage I saw.

@BertrandFR - The number I have is 05731346

I don’t expect a huge response, as I haven’t got a support contract, but appearing to waste over a terabyte of tape on a 1.1 terabyte backup (having all but eliminated block size and foobarred hardware compression as causes) would warrant a bit of a response.


Support is on a best-effort basis in this case.

If there is a high volume of support calls from customers with support contracts, Veeam support will eventually close calls from customers without a contract.

You can try opening a call again in some time. If the call volume is lower then, they will work on it...


The case being closed quickly is likely due to Community Edition (the free product) and the issue not being something “broken” - but I agree it’s not making sense. On this site we can’t supersede case operations, but I understand why it wasn’t handled on the free edition.

I actually didn’t know that Community Edition does job to tape, I thought it only did file to tape. I’ll share it internally to see what I can find out.

(Tab open for a long time and Rickatron returns...)

This is interesting:  HPE LTO-7 Ultrium on Veeam shows only 5Tb instead of 15 Case #05095184

- Are the tapes formatted a certain way/size? (less likely)

- Is compression getting us on actualities here? (more likely)

Both are discussed in this (IMHO) similar thread over on R&D Forums. 


Posting in the R&D Forums might really help you. There are some smart Veeam employees who frequent the tape section, and if you provide your case number they might give you some insight. 

