
The roadmap for LTO tape has been extended to generation 14, which is projected to store up to 576 TB of uncompressed data and an unbelievable 1,440 TB of compressed data on a single tape.

It will be interesting to see whether this can be realized and the capacity really doubled with each of the coming generations...

See more here:

https://www.lto.org/2022/09/lto-program-announces-extension-to-the-lto-tape-technology-roadmap-to-generation-14/
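For what it's worth, the doubling claim lines up with LTO-9's published 18 TB native capacity and the 2.5:1 compression ratio the LTO program quotes. A quick sanity check in Python:

```python
# Project the roadmap from LTO-9's 18 TB native capacity, doubling each
# generation, at the LTO program's assumed 2.5:1 compression ratio.
native_tb = 18  # LTO-9 native capacity in TB
for gen in range(10, 15):
    native_tb *= 2
    print(f"LTO-{gen}: {native_tb} TB native / {native_tb * 2.5:.0f} TB compressed")
```

Doubling five times lands exactly on the 576 TB / 1,440 TB figures above.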


Yes, all OK, I am with you. My statement was about a SAN where all connections are 8 or 16 Gb or whatever. 😎
Mixing isn't a good idea at all...

In such an environment it is no problem to run several tape drive connections over one server connection...


I'll add that slower storage and faster hosts usually isn't as much of an issue as faster storage and slower hosts, especially if your zoning is good and you are not using ISLs.

I often see people go out and buy an all-flash SAN with 32 Gb ports and then connect servers to it at 8 Gb, though. That can cause an issue. Here are a couple of videos explaining it. It's pretty dry, so grab a coffee. Haha.

https://mediacenter.ibm.com/media/Mixing+Fibre-Channel+Speeds+on+the+Same+Fabric/1_6ecwy6ij/172212232



Mhh, there is no need for a faster fibre connection to a single tape drive.

But normally several tape sessions run over one fibre connection on a server, so you could need faster connections there...

Right, but that isn't the limiting factor right now. I run 32 and 16 Gb for most of my servers currently. I have 16s in my tape proxy servers.

BertrandFR said: "I hope drive connectivity will be increased beyond 8 Gb/s in the new generations."

I personally don't think it will be required until 15, but it might get implemented in 13/14, since mixing speeds more than two generations apart on the same fibre fabric has a negative effect. Slapping a bunch of 8 Gb LTO tape drives onto 64 Gb fibre in the servers is not best practice.


Mhh, there is no need for a faster fibre connection to a single tape drive.

But normally several tape sessions run over one fibre connection on a server, so you could need faster connections there...

And with disk storage over fibre connections it is a completely different case, too...


Can't wait for LTO14! I just hope the humidity tolerances will not get tighter like they did with LTO9. More capacity but more sensitive!

I am convinced LTO development is pushed by the hyperscalers (thx AWS...) 🙂. We can consider that the majority of data is cold, so power consumption is reduced with the use of tape.

I have Quantum libraries (i6000) that are 15 years old and still updated with new drives etc., quite a profitable investment. I hope drive connectivity will be increased beyond 8 Gb/s in the new generations.

I love the new object storage solutions with direct-to-tape or tiering. Welcome replication between sites and performance with tiering!

With LTO10 being about 1,100 MB/s, it's tough to say if there will even be a requirement to go above 8 Gb. Based on the averages of previous gens, LTO13 will be about 3,300 MB/s, which means 14 could end up between 3,300 and 5,500. Still no requirement for 16 Gb fibre, but I'd assume they switch by then, as 8 Gb will be obsolete for most devices. I really don't like having more than 2 different speeds in my fabric if possible, but it seems tape is the last 8 Gb device kicking around :)
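Here's a rough way to eyeball when each link speed runs out of headroom (Python). The FC figures are the usual approximate payload rates per direction; the LTO-9 rate is published, the LTO-10 rate is the planned one, and the 13/14 numbers are just this thread's extrapolation:

```python
# Compare approximate usable FC bandwidth per link speed (MB/s, per direction)
# with native drive rates. Only the LTO-9 figure is a published spec; LTO-10
# is the planned rate and LTO-13/14 are guesses from this thread.
fc_mb_s = {"8GFC": 800, "16GFC": 1600, "32GFC": 3200, "64GFC": 6400}
drive_mb_s = {"LTO-9": 400, "LTO-10 (planned)": 1100,
              "LTO-13 (guess)": 3300, "LTO-14 (guess)": 5500}

for drive, rate in drive_mb_s.items():
    enough = [link for link, bw in fc_mb_s.items() if bw >= rate]
    slowest = enough[0] if enough else "none listed"
    print(f"{drive}: {rate} MB/s -> slowest sufficient link: {slowest}")
```

(Whether a drive actually sustains its native rate, or runs even faster on compressible data, is another matter; this is just line-rate arithmetic.)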



Offsite tapes are all well and good... but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line?

I have a client that I deployed tape to this summer. Next week I'm going to be replacing their SAN (and upgrading their NASes that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not-so-crazy idea to do a full restore of all VMs from tape to the old SAN to verify all is well.

And how did the restore go? Flawless?

Don't know yet. The SAN is getting installed in a couple of hours.



That is a crazy/not-so-crazy idea I may have to try.

I've made and presented some educated guesses, backed by testing, about how long a REAL DR situation will take us to get functional, semi-functional and fully back to normal, between SRM, Veeam, restores and different scenarios, whether it's site down, ransomware, etc.

I often keep our old SANs for temp space, landing areas for things, "unsupported" risky areas for people to stack things up in testing and labs, etc. I've even gone as far as running backups on these unsupported areas, but I warn people that things can go south in a hurry if disks or controllers fail, and it's at their own risk.

What I haven't done is a full, timed tape restore of our production environment. The restore alone will take quite a bit of time, and it doesn't confirm anything is going to work when booted (apps talking to DCs, talking to DBs, etc.), but you could verify the tape jobs and have a time frame.

It's a good way to test the load on your servers and make sure your Veeam, SAN, network and fibre infrastructure can handle it, as that is a ton of data. When I started at a previous job they did SRM "tests" all the time and the tests passed with ease, but they never had to use it for real. One day I set up a test VM and volume with SAN replication like they had, created a protection group and figured I'd flip the VM to the other side; it failed so badly it broke a few things. It ended up being wonky networking that wouldn't show up in the testing. I'd rather know this BEFORE being in a critical situation and spending the time fixing issues that didn't need to be there.




While at an IBM facility I saw a product only 3 customers in the world had. It was a chute that would let tapes go from one library to another over the top of the aisle. I guess not too many people pushed for 40-frame systems, so it never became more popular. lol

I just got rid of a 3500; it was a very solid machine.

We had a look at this chute or bridge (don't remember the name of it) for this customer, but we decided against it because the transport of the tape via this thing was rather slow.

But it was an interesting idea… 😀


Nice, how many tapes can it manage?

For my Qualstar, reading the current specs, it looks like it would hold about 1,700 tapes between the main unit and the MEM add-on unit. I don't think we ever had it completely full, as some tapes were always stored offsite in a secured location, but to say it was a lot would be an understatement. When I decommissioned it, we had issues with the robot being out of alignment again, so we just had it unlock the door and manually removed the tapes. We took one of those plastic Rubbermaid carts and stacked them all up on top of it. I think we removed 500-600 tapes, which made for quite the heavy cart.




Yes, I like the TS4500 and TS3500, too. 😎

Had 7 of them with 6 to 12 frames each, nearly 300 Jaguar drives in total (no LTO), 2 robot arms each and around 15,000 tapes at one customer. There were no HD frames at that time… Today it would be several frames fewer.


My libraries have 8 LTO8 drives each currently. I'll most likely be at 12 per library in the next short while.

They are beasts, but the price tag wasn't too bad for the pair. It's all relative when you need PBs of backup space.

I've been a fan of the IBM TS4500 libraries. You can have 12 drives in frame one and 16 in each additional frame (up to 128 per library).

I think if you go 18 frames you max out at 128 LTO8 drives and 23k tapes or something crazy. It's like 700 PB compressed. Even 660 LTO8 tapes in a single frame with 12 drives is a monster backup system, though. I'm happy my SANs are great and my FC switches give me the throughput to really push all this, as those tapes will hit their max data rates.
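Those numbers roughly check out. A one-liner, assuming LTO-8 media at 12 TB native / 30 TB compressed (2.5:1) per cartridge:

```python
# Rough check of the "700 PB compressed" figure for a maxed-out library.
cartridges = 23_000   # "23k tapes or something crazy"
compressed_tb = 30    # LTO-8 compressed capacity per cartridge
print(f"{cartridges} x {compressed_tb} TB = {cartridges * compressed_tb / 1000:.0f} PB compressed")
```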


In a previous life I was an IBM SSR. I replaced and repaired these things all the time. They seem complex, and that robot gets a workout, but it's actually all pretty straightforward and, knock on wood, other than the odd gripper replacement here and there, they are super durable.



I have two LTO8 libraries running full bore about 24 hours a day. This excites me, but my gosh, that is a LOT of data.

Data streams will need to be added or sped up significantly. My biggest gripe with LTO8 is how long a single backup or restore gets when you have a VM that spans a few tapes. I'd much rather write it to several tapes at once.

Curious, when you talk about having a library, how many drives are you talking about? I used to manage a Qualstar library in a previous role. It consisted of the library unit with... hard to remember, but I think 8 drives. I want to say that they were something like LTO5 and LTO6 (could have been LTO4 and LTO5). It had a turnstile on one side. We were using Quest NetVault to stage backup data to a SAN, and then it would write the data off from the SAN to the Fibre Channel drives in the Qualstar. Quite the beast. It eventually got replaced by Dell Avamar/DataDomain and I decommissioned the tape library. My understanding is that the library was something like $1 million when it was purchased. I'll have to dig up some pictures if I can find them, but here's a couple of stock photos of roughly what it looked like. It was very cool for its time, but was a beast to calibrate if the robot got misaligned with the drives and tape slots.


I was talking about smaller shops, where they could have the option to go from LTO8 to LTO10 rather than adding frames.

LTO10 isn't even going to be available until 2024, provided there are no shortages of tapes again. LTO9 isn't worth an upgrade unless you are coming from 6 or 7.

Data will increase forever, and in places like mine people want to keep it forever. The backup windows get long and things cost more money.


😂😂😂 I am afraid we will see multiple-frame libraries with extremely dense tapes. The amount of data will increase further and further...


Data streams will need to be added or sped up significantly.

Yes, I agree.
I only know the planned transfer rate for generation 10: it is supposed to increase from 400 MB/s with Gen9 to 1,100 MB/s with Gen10. So the transfer rate will definitely increase (BTW: Gen1 had 20 MB/s...).


My biggest gripe with LTO8 is how long a single backup or restore gets when you have a VM that spans a few tapes. I'd much rather write it to several tapes at once.

If you keep your backup chains short, not that many tapes should be needed for a single VM... And for backup-to-tape you can use more than one tape in parallel...
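To put the rates quoted above into perspective, here is some fill-time arithmetic (Python). Capacities are native; the Gen10 figures are the planned roadmap ones, not shipping specs:

```python
# Time to write a full cartridge at the native rate, streaming without
# interruption. Capacity has grown faster than speed, so each generation
# takes longer to fill.
specs = {  # generation: (native capacity in GB, native rate in MB/s)
    "LTO-1": (100, 20),
    "LTO-9": (18_000, 400),
    "LTO-10 (planned)": (36_000, 1100),
}
for gen, (cap_gb, rate) in specs.items():
    hours = cap_gb * 1000 / rate / 3600
    print(f"{gen}: {cap_gb / 1000:.0f} TB at {rate} MB/s -> {hours:.1f} h to fill")
```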


I do use multiple tapes at once if the job is backing up several VMs. What happens is, if a person has, let's say, one monster VM of 80 TB, the other jobs will finish and then it will run on one tape for a few days. While the speed of LTO8 is quite good, the data sizes keep increasing to the point that it's going to take days or weeks to restore VMs.
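Some back-of-envelope math on that, assuming LTO-8's 360 MB/s native rate and a single uninterrupted stream (real-world throughput varies with compression and how well the drive is kept fed):

```python
# Single-stream restore time for one large VM off one LTO-8 drive.
vm_tb = 80          # the "monster VM" above
rate_mb_s = 360     # LTO-8 native transfer rate
hours = vm_tb * 1_000_000 / rate_mb_s / 3600
print(f"{vm_tb} TB at {rate_mb_s} MB/s = {hours / 24:.1f} days on a single stream")
```

That's about 2.6 days before you even boot anything, which is where multiple parallel streams start to matter.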


That 576 TB, or 1,440 TB compressed, off one sequential tape is going to be something. You'd need a 1.4 PB landing/staging area for it, haha.


The fibre infrastructure alone is going to get pretty expensive to keep up with that. Either way, I'm excited for it.

Yes 😎 LTO-8 is 5 years old now. The data rate and tape size of LTO-8 do not keep up with the data growth we have seen over the last few years.

LTO-14 is at least 10 years in the future. Until then, a 1.5 PB staging area probably sounds like a joke… 😎
What would you have said in the year 2000 (when LTO-1 was new) about the 18-45 TB staging area needed for LTO-9? It was just not imaginable... At that time the 100-200 GB of LTO-1 were huge...


True, but LTO-9 is current and could still be faster/larger. I have VMs that would already fill an LTO10 or 11 tape.


I agree a 1.5 PB staging area isn't reasonable for most, but as someone with multiple PB of on-prem storage, it's not as far off as you think, and not really a joke. I have 200 TB of SSD staging right now from an old decommissioned SAN I just decided to leave in place for tape restores and other things where I need a landing area. It's off maintenance so it doesn't cost me anything, and since I never put production or non-redundant data on it, it doesn't matter if it dies.

At this point the landscape has changed vastly since 2000. 1.4 PB tapes DO seem reasonable and imaginable. The question is whether technology can keep up with demand. If not, we will end up going back to multiple-frame libraries instead of denser tapes.



There is a new article about LTO tape on Blocks & Files with some information on why the capacity was decreased for LTO Gen 9.

"A person close to LTO said the truth about the LTO-9 capacity reduction "was just to keep the retrocompatibility." The explanation is this: "It was 100 percent due to technical issues, mainly the track control system."

If we want to add data tracks (like lines on a vinyl disk), we need to improve the data track control system (which is in charge of keeping the head "in line") and thus decrease or even kill the retrocompatibility, as the head will not be able to follow two different track control systems. Thus, there's a choice; less capacity and keep backward compatibility, or more capacity and lose it. For LTO-9, IBM chose to reduce the capacity and keep the retrocompatibility."

https://blocksandfiles.com/2022/09/07/lto-tape-future/


Geez... I deployed a Gen7 drive using Gen6 tapes earlier this year. I don't have clients with anything NEAR Gen 14. My client with the largest dataset is going to be near the 10/11 area, but they don't need to do tape.

Wait the 10 years until we are at Gen14…

10 years ago, the amount of data we have today was not imaginable either...

