Solved

Free space on tape


Userlevel 2

Likely a noob question, but just want to be sure things aren’t broken.

I’ve got a setup with Veeam B&R Community Edition on a single Windows machine. I have full backups scheduled weekly, and daily backups on the other days, to a Quantum LTO-7 drive, using LTO-6 media.

The console reports the total capacity of the tapes as 2.1TB. Allowing for the whole base 10 / base 2 argument, this is what I’d expect. 
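To put numbers on the base-10/base-2 point, here is a quick sketch (plain arithmetic, not Veeam-specific; the 2.5 TB figure is the commonly quoted LTO-6 native capacity, not a value taken from the console):

```python
# The "base 10 / base 2" gap: tape vendors quote decimal TB, while some
# software reports binary TiB, which is about 9% smaller per unit.
def tb_to_tib(tb: float) -> float:
    """Convert decimal terabytes to binary tebibytes."""
    return tb * 1000**4 / 1024**4

# LTO-6 native capacity is commonly quoted as 2.5 TB (decimal),
# which is roughly 2.27 TiB when expressed in binary units.
print(round(tb_to_tib(2.5), 2))
```

So a figure in the low 2-point-something "TB" range for an LTO-6 tape is in the ballpark of what unit conversion alone would produce.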

The full backup (done last night) reports total transferred is around 1.1TB, but the tape is only reporting around 250GB free. I’m expecting around a TB free (2.1TB - 1.1TB). 

You can also see that the “daily” tapes are reporting 2.1TB free (again, this is expected - there’s not a lot written, maybe ~90MB per day, so not enough to make a difference to the free space reported).

Is there something amiss somewhere, or am I going to have to migrate to LTO-7 media a lot sooner than I thought?



Best answer by Dima P. 28 November 2022, 21:32


30 comments

Userlevel 2

Hi Dima,

Definitely not any of the problems you’ve mentioned.

The tapes are genuine Quantum ones, fresh out of the box. I have 8 tapes in total in the pool, rotated so that each one sees 1 weekly backup and 6 daily backups in an 8 week cycle. The backups have been running for about 14 months, so each tape has only seen 7 or 8 cycles. It’s exactly the same on a brand new, unopened tape too.

Cleaning is done at the end of an 8 week cycle, well before the drive requests it. 

The free space is reported correctly when the tapes are empty, so nothing to do with extra partitions etc.

And it’s not human error either: the tapes are LTO-6, so the NATIVE capacity is 2.1TB.

If you’d read my original post, you’d have realised most of this.

So despite the “Solved” icon at the top of the screen, this issue most definitely is NOT solved.

Userlevel 7
Badge +10

All hail reply from @Dima P. 

Hi guys!

The free space is reported by the drive based on the End of Media attribute, so there is no mystery about why B&R shows that amount of storage: it’s simply what it was told by the tape > drive > library :)

  1. You may have an additional partition, created by some other software, which ‘blocks’ part of the tape capacity. To deal with that, you need to fully erase the tape via vendor tools (i.e. reformat the tape).
  2. As a tape nears the end of its lifespan, bad blocks are recognized/noted by the drive and excluded from the capacity. This could be an issue with either the tape being old or the drive being dirty.
  3. Human error (I bet that’s not the case, but still): marketing capacity always doubles the actual tape free space due to assumed compression savings, while B&R displays the native capacity available on tape. For backup files, compression has little effect anyway, as we already compress the data during backup to disk.

Based on the description I’d go with cleaning / tape lifespan investigation.

Userlevel 7
Badge +8

Posting in the R&D Forums might really help you. There are some smart Veeam employees who frequent the tape section, and if you provide your case number they might give you some insight.

Hey @Scott  → I’ve sent this thread to the tape PM; no need for duplicate posting.

My bad Rick. Didn’t see that. 


Userlevel 7
Badge +10

The case closing quickly is likely due to Community Edition (the free product) and the behavior not being “broken”, but I agree it’s not making sense. On this site we can’t supersede case operations, but I understand why it wasn’t handled on the free edition.

I actually didn’t know that Community Edition does backup-to-tape jobs; I thought it only did file-to-tape. I’ll share it internally to see what I can find out.

(Tab open for a long time and Rickatron returns...)

This is interesting:  HPE LTO-7 Ultrium on Veeam shows only 5Tb instead of 15 Case #05095184

-Are the tapes formatted a certain way/size?  (less likely)

-Is compression getting us on actualities here? (more likely)

Both are discussed in this (IMHO) similar thread over on R&D Forums. 

Userlevel 7
Badge +17

Support is on a best-effort basis in this case.

If there is a high volume of support calls from customers with support contracts, Veeam support will eventually close calls from customers without a support contract.

You can try to open a call again after some time. When the call volume is not that high, they will work on it...

Userlevel 2

Hi @Matt C ,
the standard blocksize is 256 Kilobyte, not Kilobit….

Darn it… I’ll get there eventually. Still, even allowing for that, it’s nowhere near the amount of wastage I saw.

@BertrandFR - The number I have is 05731346

I don’t expect a huge response, as I haven’t got a support contract, but appearing to waste over a terabyte of tape on a 1.1 terabyte backup (having all but eliminated block size and foobarred hardware compression as causes) would warrant a bit of a response.


Userlevel 7
Badge +8

Share your case number @Matt C , Product management Team probably can do something about it. @Rick Vanover @Hannes 

Userlevel 2

Good news and bad news folks…

The removal of the larger personal stuff (music, movies, digital photos) from the backup set has reduced it to around 300GB, so there’s much more space on the tape for important stuff. The reduced backup on Friday night is still not reporting the amount of free space correctly, so whatever the issue is, it’s still present.

Bad news - Veeam have closed the support case I opened, without so much as a peep of assistance or enquiries, so it doesn’t look like they’re bothered about this one.

Just glad I haven’t forked out loads of dosh if this is typical support.

Back to the tape block size… since it’s kbits rather than kbytes as I originally assumed, the blocks would be 8 times smaller than the size used in my calculations above, so there should be much less space wastage due to that. However, as shown above, even the larger (incorrect) block size is nowhere near enough to cause an error of this magnitude.

Thanks to everyone for their work on this one. I’m just a bit sad it’s almost in vain, as Veeam don’t seem to want to pick up the baton and run with it. 

Userlevel 7
Badge +8

I use backup to tape, and if I append on my weekly jobs doing per-VM backups, it takes VBK files and puts them where it pleases, so I tend not to track this stuff, but it works and fills my tapes. It’s never been “wrong”.


Perhaps with file-to-tape I would be curious about block size as well if you were backing up a ton of really small files. That being said, when I use file-to-tape to back up a 1.4TB share, it usually uses about 1.4TB of my tapes. For most modern files, a larger block size is recommended for performance, given that we aren’t using 1KB files for everything.

Userlevel 7
Badge +7

No, the blocksize unit is KB for LTO (the largest possible is 512 KB, if I remember correctly). In most cases Veeam sets it to 256 KB.

Ok, in your case there are neither that many files nor that many really small files. I have seen clients with millions of 200-byte files…

So - as said yesterday - I would open a support case for this.

Yes, you are right @JMeixner, Veeam sets the block size to 256 KB.

https://helpcenter.veeam.com/docs/backup/vsphere/tape_supported_devices.html?ver=110#data-block-size

Userlevel 7
Badge +17

👍🏼 Please let us know the results.

Userlevel 2

Hi Joe,

Thanks for the correction.

I’ve opened a support case. In the meantime, I’ve archived some of my personal stuff to an LTO-7 LTFS tape, which should free up some space for more important business stuff!

The joys of being a one person company!


Userlevel 2

According to the tape drive properties, the block size is 128K - Seems sensible.

Done some analysis on the files, and I have the following results

I have counted 121613 files with a size of 128K or less. If I assume these all take a 128K block, then it amounts to 13.5 GB of wasted space.

For the large files (over 128K), I have counted 23422 files. If I assume each of these takes up to an extra 128K block on top of its actual size, to round up to a whole number of blocks on tape, I get another ~3GB of wasted space.

So, worst case, with each “small” file taking one block, and an extra block added on to “large” files, there shouldn’t be any more than 20GB wasted. That’s less than 2% of the 1.1TB total, which is entirely acceptable.

So, I don’t think it’s a block size issue. Even though there are a lot of small files, the numbers just don’t add up. Unless I’ve misinterpreted the 128K, and it’s actually 128M or something.
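The worst-case estimate above can be reproduced with a short script (a sketch, assuming every file is padded up to a whole number of fixed-size tape blocks; the 128 KiB figure comes from the drive properties, and the path in the usage comment is a placeholder):

```python
import os

BLOCK = 128 * 1024  # 128 KiB tape block, per the drive properties

def block_padding_waste(root: str, block: int = BLOCK) -> int:
    """Worst-case bytes lost to padding if every file is rounded up
    to a whole number of tape blocks."""
    waste = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip unreadable or vanished files
            waste += (-size) % block  # bytes to the next block boundary
    return waste

# Usage (hypothetical path):
# print(f"{block_padding_waste('D:/backup_set') / 2**30:.1f} GiB worst-case padding")
```

Even with this pessimistic assumption, hundreds of thousands of files only account for tens of GB, nowhere near a terabyte.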


Userlevel 7
Badge +17

That could be it, Joe. I noted, whilst poking around, that it looks like 128K blocks on the tape. I'll have a look at file sizes within the set tomorrow, and see how it looks.

One workaround would be a file-to-disk and then a disk-to-tape job. Hopefully it'd pack it in better...

This was why I asked about the retention of the disk repo; I believed you were doing a backup-to-tape job. With that you have some big files and the tape will be used more efficiently.


Userlevel 7
Badge +17

Ok, so it is a file-to-tape job.

I don't know the tape format Veeam uses in depth, but perhaps a fixed block size is used and each small file occupies at least one block? Then it would be possible that much more space is used on the tape than the amount shown on disk and in the job overview…

Userlevel 2

There isn't one. It's a straight whatever-is-on-the-disk to tape backup. 

There are a couple of backups to a disk repository from my desktop and laptop via agents, but these are minuscule in comparison. All the important stuff is in SVN repositories, or on the server and used via a network share. 

The backup repository folder is included in the main D2T file set, so it should still be reflected in the transfer size.

Userlevel 7
Badge +17

OK, the backup without using HW compression in the tape drive has just completed.

Still no change.

~1.1TB transferred on the job log, 212.8GB showing free on a 2500GB tape. Lost over a TB of space somewhere...

As I said above, Veeam data is compressed already. Tape drive hardware compression cannot compress it any further (or only very little).


You say that you have 4 weeks retention for your tape media pool. How long is the retention for the disk repository?


Userlevel 2

Retention in the media pool is set to protect for 3 weeks, adding new media as required (not happened yet… still on the original 8 tapes I set up)

The tape backup job is set to export the media set on completion on Thursday and Friday, i.e. after the last daily backup in the week’s sequence (Thursday) and after the full backup done on Friday.

Next media set starts with the daily backup on Saturday.

I have 4 pairs of tapes, a daily and weekly in each pair, 3 pairs protected, and 1 “active”. I’ve verified (by looking in catalogues) that all the daily backups are present on the daily tape, so they are appended one after the other. At the end of 4 weeks, the tapes in each pair are swapped around, so in the long 8 week cycle, the tape used for week 1 full backup is then used for week 5 daily (and w1 daily used for w5 full) to ensure that each tape is used roughly the same amount in the long term. 

Looking at the numbers for transfer speed and time, it does seem to be transferring 1.1TB to the tape drive. Since the only thing I can think of that *might* muck things up is the HW compression in the tape drive, I’ve switched it off. I’ve cleared the most recent full backup and kicked it off again without HW compression. That should rule it in or out. 69% through now.
