LTO roadmap has been extended up to Generation 14


Userlevel 7
Badge +17

The roadmap for LTO tape has been extended up to generation 14, which is projected to store up to 576 TB of uncompressed data and an unbelievable 1,440 TB of compressed data on a single tape.

It will be interesting to see whether this can be realized and the capacity really doubled with each of the coming generations...
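As a quick sanity check, here is a minimal sketch (assuming LTO-9's 18 TB native capacity as the starting point and the 2.5:1 compression ratio the LTO program quotes) showing that doubling each generation lands exactly on those numbers:

```python
# Project native capacity forward from LTO-9, doubling per generation,
# and apply the 2.5:1 compression ratio quoted on the LTO roadmap.
native_tb = 18  # LTO-9 native capacity in TB
for gen in range(10, 15):
    native_tb *= 2
    print(f"LTO-{gen}: {native_tb} TB native / {native_tb * 5 // 2} TB compressed")
# LTO-14 comes out at 576 TB native / 1440 TB compressed, matching the roadmap.
```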

See more here:

https://www.lto.org/2022/09/lto-program-announces-extension-to-the-lto-tape-technology-roadmap-to-generation-14/

 


39 comments

Userlevel 7
Badge +17

Yes, all OK, I am with you. My statement was about a SAN where all connections are 8 or 16 Gb or whatever. 😎
Mixing isn’t a good idea at all...

In such an environment it is no problem to have several tape drive connections on one server connection...

Userlevel 7
Badge +8

I’ll add, slower storage and faster hosts usually isn’t as much of an issue as faster storage and slower hosts. Especially if your zoning is good and you are not using ISLs.

 

I often see people go out and buy an all-flash SAN with 32 Gb ports and then connect servers to it at 8 Gb, though. That can cause an issue. Here are two videos explaining it. It’s pretty dry, so grab a coffee. haha

https://mediacenter.ibm.com/media/Mixing+Fibre-Channel+Speeds+on+the+Same+Fabric/1_6ecwy6ij/172212232

 

 

Userlevel 7
Badge +8

Mhh, no need for a faster fibre connection to a single tape drive.

But normally several tape sessions run over one fibre connection of a server. So, you could need faster connections there….

Right, but that isn’t a limitation right now. I run 32 and 16 Gb for most of my servers currently, and I have 16s in my tape proxy servers.

 

BertrandFR said: “I hope drive connectivity will be increased beyond 8 Gb/s in the new generations.”

I personally don’t think it will be required until generation 15, but it might get implemented in 13/14, because Fibre Channel has a negative effect if you are mixing speeds more than two generations apart. Slapping a bunch of 8 Gb LTO tape drives onto 64 Gb fibre in the servers is not best practice.
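To make that rule of thumb concrete, here is a small sketch; the speed list and the two-generation threshold are assumptions taken from the comment above, not an official compatibility matrix:

```python
# Common Fibre Channel port speeds, one entry per generation (Gb/s).
FC_SPEEDS = [1, 2, 4, 8, 16, 32, 64, 128]

def generations_apart(speed_a: int, speed_b: int) -> int:
    """How many FC generations separate two port speeds."""
    return abs(FC_SPEEDS.index(speed_a) - FC_SPEEDS.index(speed_b))

# 8 Gb tape drives on 64 Gb host ports: three generations apart,
# outside the two-generation comfort zone described above.
print(generations_apart(8, 64))   # -> 3
print(generations_apart(16, 32))  # -> 1, generally unproblematic
```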

 

 

Userlevel 7
Badge +17

Mhh, no need for a faster fibre connection to a single tape drive.

But normally several tape sessions run over one fibre connection of a server. So, you could need faster connections there….

And disk storage with fibre connections is a completely different case, too….

Userlevel 7
Badge +8

Can’t wait for LTO-14! I just hope that the humidity tolerances will not drop like they did with LTO-9. More capacity, but more sensitive!

I am convinced LTO development is pushed by the hyperscalers (thanks, AWS... :)). We can assume that the majority of data is cold, so power consumption is reduced with the use of tape.

I have Quantum libraries (i6000) that are 15 years old and still updated with new drives etc., quite a profitable investment. I hope drive connectivity will be increased beyond 8 Gb/s in the new generations.

I love the new object storage solutions with direct-to-tape or tiering. Replication between sites and better performance with tiering are very welcome!

With LTO-10 being about 1100 MB/s, it’s tough to say if there will even be a requirement to go above 8 Gb. Based on the averages of previous generations, LTO-13 will be about 3300 MB/s, which means Gen 14 could end up between 3300 and 5500 MB/s. Still no requirement for 16 Gb fibre, but I’d assume they switch by then, as 8 Gb will be obsolete for most devices. I really don’t like having more than two different speeds in my fabric if possible, but it seems tape is the last 8 Gb device kicking around :)
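A back-of-the-envelope version of that extrapolation, purely as a sketch: the ~1.44x per-generation factor is an assumption fitted so LTO-13 lands near 3300 MB/s, and the FC figures are the usual usable-throughput approximations.

```python
# Extrapolate native drive rates from LTO-10's announced ~1100 MB/s and find
# the smallest FC link that could feed one drive at full native speed.
lto_rate = {10: 1100}  # MB/s
for gen in range(11, 15):
    lto_rate[gen] = round(lto_rate[gen - 1] * 1.44)  # assumed growth factor

fc_usable = {8: 800, 16: 1600, 32: 3200, 64: 6400}  # link Gb/s -> ~MB/s usable

for gen, rate in lto_rate.items():
    link = next(g for g, mbs in fc_usable.items() if mbs >= rate)
    print(f"LTO-{gen}: ~{rate} MB/s -> at least {link} Gb FC per drive")
# By this estimate LTO-13 (~3300 MB/s) already outruns a single 32 Gb link.
```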

 

Userlevel 7
Badge +8

Can’t wait for LTO-14! I just hope that the humidity tolerances will not drop like they did with LTO-9. More capacity, but more sensitive!

I am convinced LTO development is pushed by the hyperscalers (thanks, AWS... :)). We can assume that the majority of data is cold, so power consumption is reduced with the use of tape.

I have Quantum libraries (i6000) that are 15 years old and still updated with new drives etc., quite a profitable investment. I hope drive connectivity will be increased beyond 8 Gb/s in the new generations.

I love the new object storage solutions with direct-to-tape or tiering. Replication between sites and better performance with tiering are very welcome!

Userlevel 7
Badge +6

Offsite tapes are all well and good… but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line?

 

I have a client that I deployed tape to this summer. Next week I’m going to be replacing their SAN (and upgrading their NASes that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not so crazy idea to do a full restore of all VMs from tape to the old SAN to verify all is well.

And how did the restore go? Flawless?

Don’t know yet. The SAN is getting installed in a couple of hours.

Userlevel 7
Badge +8

Offsite tapes are all well and good… but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line?

 

I have a client that I deployed tape to this summer. Next week I’m going to be replacing their SAN (and upgrading their NASes that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not so crazy idea to do a full restore of all VMs from tape to the old SAN to verify all is well.

That is a crazy/not so crazy idea I may have to try.

 

I’ve made and presented some educated guesses, backed by testing, about how long a REAL DR situation will take us to get functional, semi-functional, and fully back to normal. Between SRM, Veeam, restores and the different scenarios, whether it’s a site down, ransomware, etc.

 

I often keep our old SANs for temp space, landing areas for things, “unsupported” risky areas for people to stack things up in testing and labs, etc. I’ve even gone as far as running backups on these unsupported areas, but I warn people that things can go south in a hurry if disks or controllers fail, so it’s at their own risk.

 

What I haven’t done is a full tape restore of our production environment, timed. The restore alone will take quite a bit of time, and that doesn’t confirm anything is going to work when booted (apps talking to DCs, talking to DBs, etc.), but you could verify the tape jobs and have a time frame.
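For a rough time frame before running the real test, a minimal sketch; the data size and drive count below are made-up examples, and 360 MB/s is LTO-8's native rate:

```python
# Estimate a best-case full restore time: every drive streaming at native speed.
def restore_hours(data_tb: float, drives: int, mb_per_s: float = 360.0) -> float:
    """360 MB/s is LTO-8 native; real restores are usually slower."""
    return data_tb * 1_000_000 / (drives * mb_per_s) / 3600

# e.g. 500 TB of production VMs across 8 LTO-8 drives:
print(f"{restore_hours(500, 8):.0f} h")  # ~48 h, before any boot/app verification
```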

 

It’s a good way to test the load on your servers and make sure your Veeam, SAN, network and fibre infrastructure can handle it as well, as that is a ton of data. When I started at a previous job, they did SRM “tests” all the time and passed with ease, but they never had to use it for real. One day I set up a test VM and volume with SAN replication like they had, created a protection group, and figured I’d flip the VM to the other side. It failed so badly it broke a few things. It ended up being wonky networking that wouldn’t show up in the testing. I’d rather know this BEFORE being in a critical situation and spending the time fixing issues that didn’t need to be there.

Userlevel 7
Badge +9

Offsite tapes are all well and good… but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line?

 

I have a client that I deployed tape to this summer. Next week I’m going to be replacing their SAN (and upgrading their NASes that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not so crazy idea to do a full restore of all VMs from tape to the old SAN to verify all is well.

And how did the restore go? Flawless?

Userlevel 7
Badge +17

While at an IBM facility I saw a product only 3 customers in the world had. It was a chute that would allow tapes to go from one library to another over the top of the aisle. I guess not too many people pushed for 40-frame systems to make it become more popular. lol

 

I just got rid of a 3500; it was a very solid machine.

We had a look at this chute or bridge (I don’t remember the name of it) for this customer. But we decided against it, because the transport of the tapes via this thing was rather slow.

But it was an interesting idea… 😀

Userlevel 7
Badge +6

Nice, how many tapes can it manage?

 

 

For my Qualstar, reading the current specs, it looks like it would hold about 1,700 tapes between the main unit and the MEM add-on unit. I don’t think we ever had it completely full, as some tapes were always stored offsite in a secure location, but to say it was a lot would be an understatement. When I decommissioned it, we had issues with the robot being out of alignment again, so we just had it unlock the door and removed the tapes manually. We took one of those plastic Rubbermaid carts and stacked them all up on top of it. I think we removed 500-600 tapes from it, which made for quite the heavy cart.

Userlevel 7
Badge +8

While at an IBM facility I saw a product only 3 customers in the world had. It was a chute that would allow tapes to go from one library to another over the top of the aisle. I guess not too many people pushed for 40-frame systems to make it become more popular. lol

 

I just got rid of a 3500; it was a very solid machine.

Userlevel 7
Badge +17

Yes, I like the TS4500 and TS3500, too. 😎

Had 7 of them with 6 to 12 frames each, nearly 300 Jaguar drives in total (no LTO), 2 robot arms each, and something around 15,000 tapes at one customer. There were no HD frames at that time… Today several frames fewer would be needed.

Userlevel 7
Badge +8

My libraries have 8 LTO-8 drives each currently. I’ll most likely be at 12 per library in the next short while.

They are beasts, but the price tag wasn’t too bad for the pair. It’s all relative when you need PBs of backup space.

I’ve been a fan of the IBM TS4500 libraries. You can have 12 drives in frame one and 16 in each additional frame (up to 128 per library).

I think if you went 18 frames you’d max out at 128 LTO-8 drives and 23k tapes or something crazy. It’s like 700 PB compressed. Even 660 LTO-8 tapes in a single frame with 12 drives is a monster backup system, though. I’m happy my SANs are great and my FC switches give me the throughput to really push all this, as those tapes will hit their max data rates.
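That compressed figure checks out as a rough calculation (assuming ~23,000 slots and LTO-8 media at 12 TB native, 30 TB at the quoted 2.5:1 compression):

```python
# Sanity-check the "700 PB compressed" ballpark for a maxed-out library.
slots = 23_000
print(f"{slots * 12 / 1000:.0f} PB native")      # ~276 PB
print(f"{slots * 30 / 1000:.0f} PB compressed")  # ~690 PB, i.e. "like 700PB"
```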

 

In a previous life I was an IBM SSR. I replaced and repaired these things all the time. They seem complex, and that robot gets a workout, but it’s actually all pretty straightforward, and, knock on wood, other than the odd gripper replacement here and there, they are super durable.

 

 

 

 

Userlevel 7
Badge +17

Nice, how many tapes can it manage?

 

Userlevel 7
Badge +6

I have two LTO-8 libraries running full bore about 24 hours a day. This excites me, but my gosh, that is a LOT of data.

 

Data streams will need to be added or sped up significantly. My biggest gripe with LTO-8 is how long a single backup or restore gets when a VM spans a few tapes. I’d much rather write it to several tapes at once.

 

Curious, when you talk about having a library, how many drives are you talking about? I used to manage a Qualstar library in a previous role. It consisted of the library unit with... hard to remember, but I think 8 drives. I want to say they were something like LTO5 and LTO6 (could have been LTO4 and LTO5). It had a turnstile on one side. We were using Quest NetVault to stage backup data to a SAN, and then it would write off the data from the SAN to the Fibre Channel drives in the Qualstar. Quite the beast. It eventually got replaced by Dell Avamar/Data Domain and I decommissioned the tape library. My understanding is that the library was something like $1 million when it was purchased. I’ll have to dig up some pictures if I can find them, but here are a couple of stock photos of roughly what it looked like. It was very cool for its time, but was a beast to calibrate if the robot got misaligned with the drives and tape slots.

 

 

Userlevel 7
Badge +8

I was talking about smaller shops, where they could have the option to go from LTO-8 to LTO-10 rather than adding frames.

 

LTO-10 isn’t even going to be available until 2024, provided there are no shortages of tapes again. LTO-9 isn’t worth an upgrade unless you are coming from 6 or 7.
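The published native capacities per generation back that up; a tiny sketch:

```python
# Native capacity in TB per LTO generation (published figures).
native_tb = {6: 2.5, 7: 6.0, 8: 12.0, 9: 18.0}
for gen in (6, 7, 8):
    print(f"LTO-{gen} -> LTO-9: {native_tb[9] / native_tb[gen]:.1f}x capacity")
# -> 7.2x from LTO-6 and 3.0x from LTO-7, but only 1.5x from LTO-8.
```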

 

Data will increase forever, and in places like mine people want to keep it forever.  The backup windows get long and things cost more money. 

 

Userlevel 7
Badge +17

😂😂😂 I am afraid we will see multiple frame libraries with extremely dense tapes. The amount of data will increase further and further...

Userlevel 7
Badge +8

Data streams will need to be added or sped up significantly.

Yes, I agree.
I only know the planned transfer rate for generation 10: it is planned to increase from 400 MB/s with Gen 9 to 1100 MB/s with Gen 10. So the transfer rate will definitely increase (BTW: Gen 1 had 20 MB/s...).

 

My biggest gripe with LTO-8 is how long a single backup or restore gets when a VM spans a few tapes. I’d much rather write it to several tapes at once.

If you keep your backup chains short, not that many tapes should be needed for a single VM…. And for backup to tape you can use more than one tape in parallel….

 

 

I do use multiple tapes at once if the job is backing up several VMs. What happens is, if a person has, let’s say, one monster VM of 80 TB, the other jobs will finish and then it will run on one tape for a few days. While the speed of LTO-8 is quite good, data sizes keep increasing to the point where it’s going to take days or weeks to restore VMs.

 

That 576 TB, or 1440 TB compressed, off one sequential tape is going to be something. You need a 1.4 PB landing/staging area for it, haha.

 

The fibre infrastructure alone is going to get pretty expensive to keep up with that. Either way, I’m excited for it.

Yes 😎 LTO-8 is 5 years old now. The data rate and tape size of LTO-8 do not keep up with the data growth we have seen in the last few years.

LTO-14 is at least 10 years in the future. Until then, a 1.5 PB staging area probably sounds like a joke… 😎
What would you have said in the year 2000 (when LTO-1 was new) about the 18-45 TB staging area needed for LTO-9? It was just not imaginable…. At that time, the 100-200 GB of LTO-1 were huge….

 

True, but LTO-9 is current and could still be faster/larger. I have VMs that would already fill an LTO-10 or 11 tape.


I agree a 1.5 PB staging area isn’t reasonable for most, but as someone who has multiple PB of on-prem storage, it’s not as far off as you think, and not really a joke. I have 200 TB of SSD staging right now from an old decommissioned SAN I just decided to leave for tape restores and other things where I need a landing area. It’s off maintenance, so it doesn’t cost me anything, and if it dies, well, I never put production or non-redundant data on it.

 

At this point, the landscape has changed vastly since 2000. 1.4 PB tapes DO seem reasonable and imaginable. The question is whether technology can keep up with demand. If not, we will end up going back to multi-frame libraries instead of denser tapes.
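For scale, a small sketch of what one of those tapes means in wall-clock time, using the projected drive speeds discussed earlier in the thread (assumptions, not roadmap figures):

```python
# How long one projected 576 TB LTO-14 tape takes to write sequentially,
# end to end, at the transfer rates guessed at above.
capacity_mb = 576 * 1_000_000
for rate in (3300, 5500):  # MB/s, the projected range for Gen 13/14
    print(f"at {rate} MB/s: {capacity_mb / rate / 3600:.0f} h per tape")
# Roughly 48 h at 3300 MB/s and 29 h at 5500 MB/s, nonstop, for a single tape.
```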

 

 

Userlevel 7
Badge +17

Data streams will need to be added or sped up significantly.

Yes, I agree.
I only know the planned transfer rate for generation 10: it is planned to increase from 400 MB/s with Gen 9 to 1100 MB/s with Gen 10. So the transfer rate will definitely increase (BTW: Gen 1 had 20 MB/s...).

 

My biggest gripe with LTO-8 is how long a single backup or restore gets when a VM spans a few tapes. I’d much rather write it to several tapes at once.

If you keep your backup chains short, not that many tapes should be needed for a single VM…. And for backup to tape you can use more than one tape in parallel….

 

 

I do use multiple tapes at once if the job is backing up several VMs. What happens is, if a person has, let’s say, one monster VM of 80 TB, the other jobs will finish and then it will run on one tape for a few days. While the speed of LTO-8 is quite good, data sizes keep increasing to the point where it’s going to take days or weeks to restore VMs.

 

That 576 TB, or 1440 TB compressed, off one sequential tape is going to be something. You need a 1.4 PB landing/staging area for it, haha.

 

The fibre infrastructure alone is going to get pretty expensive to keep up with that. Either way, I’m excited for it.

Yes 😎 LTO-8 is 5 years old now. The data rate and tape size of LTO-8 do not keep up with the data growth we are seeing at the moment.

LTO-14 is at least 10 years in the future. Until then, a 1.5 PB staging area probably sounds like a joke… 😎
What would you have said in the year 2000 (when LTO-1 was new) about the 18-45 TB staging area needed for LTO-9? It was just not imaginable…. At that time, the 100-200 GB of LTO-1 were huge….

Userlevel 7
Badge +8

Data streams will need to be added or sped up significantly.

Yes, I agree.
I only know the planned transfer rate for generation 10: it is planned to increase from 400 MB/s with Gen 9 to 1100 MB/s with Gen 10. So the transfer rate will definitely increase (BTW: Gen 1 had 20 MB/s...).

 

My biggest gripe with LTO-8 is how long a single backup or restore gets when a VM spans a few tapes. I’d much rather write it to several tapes at once.

If you keep your backup chains short, not that many tapes should be needed for a single VM…. And for backup to tape you can use more than one tape in parallel….

 

 

I do use multiple tapes at once if the job is backing up several VMs. What happens is, if a person has, let’s say, one monster VM of 80 TB, the other jobs will finish and then it will run on one tape for a few days. While the speed of LTO-8 is quite good, data sizes keep increasing to the point where it’s going to take days or weeks to restore VMs.

 

That 576 TB, or 1440 TB compressed, off one sequential tape is going to be something. You need a 1.4 PB landing/staging area for it, haha.

 

The fibre infrastructure alone is going to get pretty expensive to keep up with that. Either way, I’m excited for it.

Userlevel 7
Badge +17

Data streams will need to be added or sped up significantly.

Yes, I agree.
I only know the planned transfer rate for generation 10: it is planned to increase from 400 MB/s with Gen 9 to 1100 MB/s with Gen 10. So the transfer rate will definitely increase (BTW: Gen 1 had 20 MB/s...).

 

My biggest gripe with LTO-8 is how long a single backup or restore gets when a VM spans a few tapes. I’d much rather write it to several tapes at once.

If you keep your backup chains short, not that many tapes should be needed for a single VM…. And for backup to tape you can use more than one tape in parallel….

Userlevel 7
Badge +8

I have two LTO-8 libraries running full bore about 24 hours a day. This excites me, but my gosh, that is a LOT of data.

 

Data streams will need to be added or sped up significantly. My biggest gripe with LTO-8 is how long a single backup or restore gets when a VM spans a few tapes. I’d much rather write it to several tapes at once.

Userlevel 7
Badge +17

There is a new article about LTO tape on Blocks & Files with some information about why the capacity was reduced for LTO Gen 9.

“A person close to LTO said the truth about the LTO-9 capacity reduction ‘was just to keep the retrocompatibility.’ The explanation is this: ‘It was 100 percent due to technical issues, mainly the track control system.’

If we want to add data tracks (like lines on a vinyl disk), we need to improve the data track control system (which is in charge of keeping the head ‘in line’) and thus decrease or even kill the retrocompatibility, as the head will not be able to follow two different track control systems. Thus, there’s a choice: less capacity and keep backward compatibility, or more capacity and lose it. For LTO-9, IBM chose to reduce the capacity and keep the retrocompatibility.”

https://blocksandfiles.com/2022/09/07/lto-tape-future/

Userlevel 7
Badge +17

Geez... I deployed a Gen 7 drive using Gen 6 tapes earlier this year. I don’t have clients with anything NEAR Gen 14. My client with the largest dataset is going to be near the 10/11 area, but they don’t need to do tape.

Just wait the 10 years until we are at Gen 14…

10 years ago, the data amount we have today was not imaginable either….
