Wow Gen14. Sheesh hope I get the chance to use it before I retire.
Yes, I like the TS4500 and TS3500, too.
Had 7 of them with 6 to 12 frames each and, in total, nearly 300 Jaguar drives (no LTO), 2 robot arms each, and around 15,000 tapes at one customer. There were no HD frames at that time… Today it would take several frames fewer.
Offsite tapes are all well and good… but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line?
I have a client that I deployed tape to this summer. Next week I'm going to be replacing their SAN (and upgrading their NASes that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not-so-crazy idea to do a full restore of all VMs from the tape to the old SAN to verify all is well.
And how did the restore go? Flawless?
Wow Gen14. Sheesh hope I get the chance to use it before I retire.
Gen14 will take… no idea - 10 to 12 years. It took from 2010 to 2021 to get from Generation 5 to Generation 9...
Tapes are still (and will be for a long time) a pretty solid solution against some threats… and nearly the only solution for "real offsite".
I think the roadmap is more or less "subject to change" because LTO9 was also planned with 24TB capacity and now there is a compromise between cost and capacity; this can happen again.
We are now moving to LTO9 (from 6) and I am looking forward to it.
Yes, moving from LTO-6 to LTO-9 is the next step at two customers of mine, too.
With this, their tape libraries are still sufficient and won't have to be extended with more frames for a long time...
Geez... I deployed a Gen7 drive using Gen6 tapes earlier this year. I don't have clients with anything NEAR Gen 14. My client with the largest dataset is going to be near the 10/11 area, but they're not needing to do tape.
Wait the 10 years until we are at Gen14…
10 years ago, the amount of data we have today was not imaginable either…
There is a new article about LTO tape on BlocksAndFiles with some information on why the capacity was decreased for LTO Gen 9.
"A person close to LTO said the truth about the LTO-9 capacity reduction "was just to keep the retrocompatibility." The explanation is this: "It was 100 percent due to technical issues, mainly the track control system."
If we want to add data tracks (like lines on a vinyl disk), we need to improve the data track control system (which is in charge of keeping the head "in line") and thus, decrease or even kill the retrocompatibility as the head will not be able to follow two different track control systems. Thus, there's a choice; less capacity and keep backward compatibility or more capacity and lose it. For LTO-9, IBM chose to reduce the capacity and keep the retrocompatibility."
https://blocksandfiles.com/2022/09/07/lto-tape-future/
I was talking about smaller shops, where they could have the option to go from LTO8 to LTO10 rather than adding frames.
LTO10 isn't even going to be available until 2024, provided there are no shortages of tapes again. LTO9 isn't worth an upgrade unless you are coming from 6 or 7.
Data will increase forever, and in places like mine people want to keep it forever. The backup windows get long and things cost more money.
Nice, how many tapes can it manage?
While at an IBM facility I saw a product only 3 customers in the world had. It was a chute that would allow tapes to go from one library to another over the top of the aisle. I guess not too many people pushed for 40-frame systems to make it more popular. lol
I just got rid of a 3500; it was a very solid machine.
While at an IBM facility I saw a product only 3 customers in the world had. It was a chute that would allow tapes to go from one library to another over the top of the aisle. I guess not too many people pushed for 40-frame systems to make it more popular. lol
I just got rid of a 3500; it was a very solid machine.
We had a look at this chute or bridge (I don't remember its name) for this customer, but we decided against it because the transport of tapes via this thing was rather slow.
But it was an interesting idea…
Can't wait for LTO14! I just hope the humidity tolerances won't get stricter like they did with LTO9. More capacity but more sensitive!
I am convinced LTO development is pushed by the hyperscalers (thx AWS...). We can consider that the majority of data is cold, so power consumption is reduced with the use of tape.
I have Quantum libraries (i6000) that are 15 years old and still updated with new drives etc., quite a profitable investment. I hope drive connectivity will increase beyond 8Gb/s in the new generations.
I love the new object storage solutions with direct-to-tape or tiering. Replication between sites and good performance with tiering are very welcome!
With LTO10 being about 1100MB/s, it's tough to say if there will even be a requirement to go above 8 gig. Based on averages of previous gens, LTO13 will be about 3300MB/s, which means 14 could end up between 3300-5500. Still no requirement for 16 gig fibre, but I'd assume they switch by then as 8 will be obsolete for most devices. I really don't like to have more than 2 different speeds in my fabric if possible, but it seems tape is the last 8 gig device kicking around :)
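For what it's worth, here's roughly how that extrapolation works as a quick sketch (Python; the per-generation growth factor is just derived from the two figures above, not from any published roadmap):
```python
# Quick geometric extrapolation of LTO drive speeds per generation.
# The LTO-10 and LTO-13 figures are the ones quoted above; everything
# derived from them is an estimate, not a spec.

gen10_speed = 1100.0   # MB/s, figure quoted above
gen13_speed = 3300.0   # MB/s, the extrapolated figure quoted above

# Implied per-generation growth factor over the three steps from 10 to 13
growth = (gen13_speed / gen10_speed) ** (1 / 3)   # ~1.44x per generation

for gen in range(10, 15):
    speed = gen10_speed * growth ** (gen - 10)
    print(f"LTO-{gen}: ~{speed:,.0f} MB/s")

# LTO-14 comes out around 4,800 MB/s with this simple model,
# i.e. inside the 3300-5500 MB/s window guessed above.
```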
Mhh, no need for a faster fibre connection to a single tape drive.
But in normal cases several tape sessions run over one fibre connection of a server, so you could be in need of faster connections there…
And with disk storage on fibre connections it is a completely different case, too…
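A rough back-of-the-envelope illustrates that server-side bottleneck (the drive count and per-drive rate below are illustrative assumptions, not figures from this thread):
```python
# Aggregate throughput of several concurrent tape sessions vs. the usable
# bandwidth of a single server-side FC link.

drives = 4
native_per_drive = 400            # MB/s, roughly LTO-9 native (uncompressed)
aggregate = drives * native_per_drive
print(f"{drives} streaming drives: ~{aggregate} MB/s aggregate")

# Approximate usable payload bandwidth per FC link speed (rounded)
fc_usable = {"8GFC": 800, "16GFC": 1600, "32GFC": 3200}  # MB/s
for link, usable in fc_usable.items():
    verdict = "fits" if usable >= aggregate else "bottleneck"
    print(f"{link}: ~{usable} MB/s usable -> {verdict}")

# A single drive never needs more than 8G, but four of them streaming
# through one server HBA already saturate 8G and sit right at the 16G limit.
```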
Mhh, no need for a faster fibre connection to a single tape drive.
But in normal cases several tape sessions run over one fibre connection of a server, so you could be in need of faster connections there…
Right, but that isn't a limit right now. I run 32 and 16 for most of my servers currently. I have 16s in my tape proxy servers.
BertrandFR said "… I hope drive connectivity will increase beyond 8Gb/s in the new generations."
I personally don't think it will be required until 15, but it might get implemented in 13/14 due to fibre having a negative effect if you are mixing speeds more than 2 generations apart. Slapping a bunch of 8 gig LTO tape drives onto 64Gb fibre in the servers is not best practice.
Yes, all ok, I am with you. My statement was about a SAN with all 8 or 16 or whatever connections.
Mixing isn't a good idea at all...
In such an environment it is no problem to have several connections to tape drives on one server connection...
Offsite tapes are all well and good… but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line?
I have a client that I deployed tape to this summer. Next week I'm going to be replacing their SAN (and upgrading their NASes that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not-so-crazy idea to do a full restore of all VMs from the tape to the old SAN to verify all is well.
That is a crazy/not so crazy idea I may have to try.
I've made and presented some educated guesses, backed by testing, about how long a REAL DR situation will take us to get functional, semi-functional, and fully back to normal. Between SRM, Veeam, restores and the different scenarios, whether it's a site down, ransomware, etc.
I often keep our old SANs for temp space, landing areas for things, "unsupported" risky areas for people to stack things up for testing and labs, etc. I've even gone as far as running backups on these unsupported areas, but I warn them that things can go south in a hurry if disks or controllers fail, and that it's at their own risk.
What I haven't done is a full tape restore of our production environment, timed. The restore alone will take quite a bit of time, and that doesn't confirm anything is going to work when booted (apps talking to DCs, talking to DBs, etc.), but you could verify the tape jobs and have a time frame.
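Even a crude estimate gives you a baseline to compare a timed test restore against; a minimal sketch with made-up numbers (plug in your own dataset size, drive count, and effective rate):
```python
# Very rough timing estimate for a full restore from tape. All numbers here
# (dataset size, drive count, effective per-drive rate) are made-up
# placeholders, not figures from this environment.

dataset_tb = 200          # total size of the VM backups to restore
drives = 4                # tape drives restoring in parallel
effective_mb_s = 300      # per-drive effective rate incl. overhead/seeks

aggregate_mb_s = drives * effective_mb_s
seconds = dataset_tb * 1_000_000 / aggregate_mb_s   # 1 TB = 1,000,000 MB
print(f"~{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
# -> ~46 hours (~1.9 days) with these placeholder numbers
```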
It's a good way to test the load on your servers and make sure your Veeam, SAN, network and fibre infrastructure can handle it as well, as that is a ton of data. When I started at a previous job they did SRM "tests" all the time and they passed with ease, but they never had to use it. One day I set up a test VM and volume with SAN replication like they had, created a protection group, and figured I'd flip the VM to the other side; it failed so badly it broke a few things. It ended up being wonky networking that wouldn't show up in the testing. I'd rather know this BEFORE being in a critical situation and spending the time fixing issues that didn't need to be there.
Wow Gen14. Sheesh hope I get the chance to use it before I retire.
Gen14 will take… no idea - 10 to 12 years. It took from 2010 to 2021 to get from Generation 5 to Generation 9...
Well if it takes 12 I will have 5 years to play with it.
Over 1PB per tape… that's absolutely insane! It's a shame there's no years stamped against this roadmap, but it's exciting to see the life we still have left in tape! People have well over a decade to keep telling us tape is gonna die, and we have this roadmap to reply with!
Over 1PB per tape… that's absolutely insane! It's a shame there's no years stamped against this roadmap, but it's exciting to see the life we still have left in tape! People have well over a decade to keep telling us tape is gonna die, and we have this roadmap to reply with!
People have been saying tape is dead for 30 years or more now… the same with mainframes…
There will still be use cases for this for a long time. It's not useful everywhere, and not as the only storage without disk and object…