Replies posted by dloseke
It has to be mandatory in the cloud? Have you thought about purchasing deduplicated storage, like HPE StoreOnce or Quantum? You can have them as a virtual appliance or a physical appliance. They say the dedup ratio is up to 20:1 (realistically more like 10:1), so maybe you can fit this 50TB on a 5TB dedup appliance. Find some extra info here. Quantum has a Community edition of their dedup virtual appliance good for up to 5TB (at 20:1, up to 100TB of logical data), so you can give it a shot. Or HPE StoreOnce; this blew my mind, there's HPE StoreOnce in Azure, look. Br,

No, not necessarily, but it was a question they had. That said, I forgot about trying a dedupe appliance... I have a Quantum DXi trial/free edition that I want to try in my lab but haven't had a chance. Thank you for that suggestion!
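A quick sanity check on the sizing math above (a sketch; the 20:1 is the vendor's claim and the 10:1 is the commonly cited realistic figure, not measurements from my data):

```python
# Rough dedup sizing sketch (assumed ratios, not vendor-guaranteed figures).
logical_tb = 50  # data to protect, per the post above

for label, ratio in [("vendor best case", 20), ("realistic", 10)]:
    physical_tb = logical_tb / ratio
    print(f"{label}: {logical_tb} TB at {ratio}:1 -> ~{physical_tb:.1f} TB physical")

# realistic: 50 TB at 10:1 -> ~5.0 TB physical, which is why the 5 TB
# Community-edition appliance could plausibly hold it.
```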
In all seriousness, for archive data, does it need to be on a Compellent? Seems like an appropriate time to introduce a cheaper storage tier such as a Synology if the access will be infrequent.

Yeah, before we consolidated it all to the Compellent, all of that data was spread across two Synologys (one with an expansion unit), two EqualLogic PS6500s, and a PS6210E. Some of the archive data was somewhat overlooked because part of it was attached to a DC as an RDM and two of the volumes were attached to another VM, so it was nice to bring it all together in one place. The smaller of the two Synologys was repurposed as a backup repo for one of the archives, but I found out today I'm getting errors about its CPU fan failing. We'll see what shakes out.
I'm not sure if my team has ever had to do a restore on this VM… if so, it would likely have been an individual file or two, but not the whole VM. I actually don't have a place to drop the VM to test restoring this one.

I'm in a similar situation. I have spun it up in Veeam and pulled files off of a 100TB+ file server, but I don't usually have over 100TB just kicking around to test something. I did test restoring a few 20TB servers from tape a few weeks ago to see how long it would take, and it wasn't great, but it worked at least. I have a client that I installed a Nimble array for last week (an HF20 to replace a CS1000, if you care). I will be finishing the VM migration from the old array to the new one next week, and one thing they wanted to try before we tear out the CS1000 is a full restore of all VMs from the tape server I installed about 6 months ago. I don't expect blazing fast speeds or anything, but they wanted to make sure it worked, which I'm all for testing, and I'd…
I have got it installed and configured but have not had the chance to play with it yet. Seems very straightforward though: just copy files to the folder you configure locally and it uploads to Wasabi in your NAS account. I am just going to trial it, as the costs are pretty steep for this one.

I have a normal Wasabi account that I use anyway. $7.99/TB/month is a bit more than the plain object storage, but I think it's still a sellable product for me. Right now I'm looking at $7k of disks to increase a client's SAN by about 44TB. Those disks would pay themselves off in two years compared to Wasabi, and I could save a couple bucks per TB by buying CloudBerry Drive (or something similar) instead, but here's the kicker... the client is using a not-quite-3-year-old Compellent array that has 4 years of support left on it and 34 empty 3.5” drive bays. Dell end-of-sale'd the Compellent line, so while I can get disks now, there's no guarantee that I can get them 2 or 3 years from now, at least…
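For what it's worth, the two-year payback figure checks out with simple arithmetic (the disk cost, capacity, and per-TB rate are the figures from above; everything else is basic math):

```python
# Hedged payback comparison: one-time disk purchase vs. Wasabi's per-TB rate.
disk_cost = 7000.0   # USD, one-time, ~44 TB of SAN disks
capacity_tb = 44
wasabi_rate = 7.99   # USD per TB per month (Cloud NAS pricing cited above)

monthly_cloud = capacity_tb * wasabi_rate
payback_months = disk_cost / monthly_cloud
print(f"Cloud equivalent: ${monthly_cloud:.2f}/month")
print(f"Disks pay for themselves in ~{payback_months:.1f} months")
# ~19.9 months, i.e. roughly the two-year figure mentioned above.
```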
This is the first time I've read anything about Wasabi Cloud NAS. It's great both that Wasabi offers such a service/product and that we're able to back those up via Veeam. Any information available about how the product is licensed/priced?

I read about it for the first time a couple of weeks ago, but apparently promptly forgot about it. It wasn't until I contacted our Wasabi rep to see what the options would be for this sort of setup that he informed me about this service, setting off that light bulb in my head. I think it's relatively new, but given how stable Wasabi has been for me with the regular object storage, I feel good about this product.
I'm happy to have just found this article, as I was posting another entry asking if anyone has tried the service out. I have a client where we'd be looking at putting 50-60TB of data here to start, if it works out. The alternative would be to use regular object storage and an app like CloudBerry Drive for accessing it, but I suspect this is going to be a more integrated, better-baked solution. My concern with using CloudBerry and a regular bucket is what happens if someone deletes data before the 30-day early-delete threshold. That said, for backing up the amount of data I'm looking at, this is pretty expensive for NAS backups, so I'm going to be investigating using the Veeam Agent for backing up the Wasabi Cloud NAS volume… but I'll need to test things out first, of course.
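To make that early-delete concern concrete, here's a rough sketch of how a residual charge works under a 30-day minimum-retention policy. The per-TB rate and the exact billing mechanics are my assumptions for illustration, not quoted Wasabi terms:

```python
# Sketch of early-delete exposure: under a minimum-retention policy, objects
# deleted before min_days are still billed for the remainder of the window.
def early_delete_charge(size_tb: float, days_stored: int,
                        rate_per_tb_month: float = 5.99,  # assumed rate
                        min_days: int = 30) -> float:
    """Approximate residual charge for data deleted before min_days."""
    remaining = max(min_days - days_stored, 0)
    return size_tb * rate_per_tb_month * (remaining / 30)

# e.g. 10 TB deleted after 10 days still accrues ~20 days of charges:
print(f"${early_delete_charge(10, 10):.2f}")  # ~ $39.93
```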
My largest is about a 120TB file server. It's been rough getting it through error checking and backup defrags. Unfortunately, we don't have enough space on the repo to set up incremental fulls due to the size. It's a process… and we're still trying to find a better way to do it. Out of curiosity, how long do the backups, health checks, etc. take?

On my 115TB server I had to turn the health checks off completely. To be fair, though, the Veeam SAN had no flash disks or SSDs, so it wasn't a monster of a SAN. The client here has a large Synology NAS that's good for something like 90TB of space, with no SSDs either. I've found that SSDs aren't as useful in most of our environments for backup performance, since backup is mostly writes and SSDs are really better for read caching. If I have to choose between SSD cache and 10Gb connectivity, I've found 10Gb to be the better investment.
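A quick back-of-the-envelope on why the network link tends to dominate at these sizes (the 115TB figure is from above; the link efficiency is an assumption):

```python
# Full backups are mostly sequential writes, so the wire is usually the
# bottleneck. Rough wall-clock estimate for a full of a large server.
def hours_for_full(size_tb: float, link_gbps: float, efficiency: float = 0.7):
    """Time to push size_tb over a link running at `efficiency` of line rate."""
    throughput_gb_s = link_gbps / 8 * efficiency   # gigabytes per second
    return size_tb * 1000 / throughput_gb_s / 3600

for link in (1, 10):
    print(f"115 TB full over {link} GbE: ~{hours_for_full(115, link):.0f} h")
# ~365 h on 1 GbE vs ~37 h on 10 GbE, which is also why health checks on
# a server this size can run for days.
```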
Oh God 😲! How long does it take to back this one up? For me the max was around 10TB, daily… and the SAP logs at least every hour… The backup is not the big problem after the initial full; I'm more afraid of a complete restore… I have told the VM owners to split their disks into several VMDKs of at most 1TB each, so we can restore these big VMs with several sessions instead of a single one…

I have this issue at work with growth. DFS has been a lifesaver for moving stuff, but it's hard trying to get other people to keep the VMDKs down; every time I look, someone seems to have created 25TB+ VMDK files, lol. My biggest issue is tape. Even with 50 VMDKs, it's still a single VBK file. Because of my many file servers I do weekly fulls to tape, and it takes a few days even going to 8 drives. If I were to do incrementals, they'd usually fail during that window. I like having a weekly full in case I lose a tape or something catastrophic happens. I hope in…
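The split-into-1TB-VMDKs advice above comes down to parallelism at restore time. A hedged sketch with an assumed per-session throughput (the number itself is made up; the scaling is the point):

```python
# Why splitting huge VMDKs helps restores: one 25 TB disk restores in a
# single session, while 25 x 1 TB disks can restore as concurrent streams.
def restore_hours(total_tb: float, sessions: int, per_session_gb_s: float = 0.3):
    """Wall-clock estimate with `sessions` concurrent restore streams."""
    per_session_tb = total_tb / sessions
    return per_session_tb * 1000 / per_session_gb_s / 3600

print(f"1 session : ~{restore_hours(25, 1):.0f} h")   # ~23 h
print(f"5 sessions: ~{restore_hours(25, 5):.0f} h")   # ~5 h, if the target keeps up
```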
My largest is about a 120TB file server. It's been rough getting it through error checking and backup defrags. Unfortunately, we don't have enough space on the repo to set up incremental fulls due to the size. It's a process… and we're still trying to find a better way to do it. Out of curiosity, how long do the backups and health checks, etc. take? On my 115TB server I had to turn the health checks off completely. To be fair, though, the Veeam SAN had no flash disks or SSDs, so it wasn't a monster of a SAN.

I may just need to disable health checks. I don't like that idea, but the alternative is running checks/maintenance that takes days or weeks and causes several backups to be missed.
My largest is about a 120TB file server. It's been rough getting it through error checking and backup defrags. Unfortunately, we don't have enough space on the repo to set up incremental fulls due to the size. It's a process… and we're still trying to find a better way to do it. Out of curiosity, how long do the backups and health checks, etc. take?

Okay, so I have it split up into three jobs, each backing up specific disks. Two of the sets are disks that contain archived data that is rarely accessed; the third is the production data they are currently working on. The archive disks back up quickly because there's very little data changing there. The “current” production dataset takes between 3 and 8 hours to back up, depending on how much data changed on the disks. Fortunately (I guess) the users are doing their video editing on their workstations and then upload the final video to the file server, so there isn't new data every day, but it's fairly often. Health checks, compa…
My largest is about a 120TB file server… which is about to grow, as the client is starting to put a lot of 4K video on it. It's been rough getting it through error checking and backup defrags. Unfortunately, we don't have enough space on the repo to set up incremental fulls due to the size. It's a process… and we're still trying to find a better way to do it.
@dloseke Well, their independence didn't last long, and soon they'll probably be part of Broadcom. So hopefully they won't cut down QA (even more) and try to squeeze more out of less 😬

I think the general consensus is that VMware doesn't know how to operate without some sort of overlord/sugar daddy running the show and investing in them. We'll see what happens with the Broadcom chapter of their story…
I agree. I believe @Rick Vanover mentioned a couple of months ago, as VMware was fully divested from Dell, that they were starting to follow a more sane release schedule that would allow for better QA and more stable product releases. I'm hopeful for what future versions hold.
I won't go on a bashing spree about the VMware vSphere 7 release, but… it wasn't good.

I will. It was horrible! It was happy to eat SD cards alive if you didn't redirect the scratch partition to persistent disks. As far as I can tell, there was basically no warning on that, and it was unstable to say the least. Thank goodness U3c (?) got things back on track, but it did cause me to be a lot more cautious about new VMware releases, which is a shame, honestly, since security updates should generally be applied promptly. Glad they got things back on track!
One way that I've worked around VMs with too many or too-large snapshots is to clone the VM, offline preferred obviously. Once cloned, bring up the new clone and then delete the original. Of course, this creates a new VM and a new backup chain once you add the VM back into the backup server, so it does have its own caveats, but in really bad situations it may have to be done.

I do still have the old vCenter Converter Standalone (it can be found online from non-VMware sources), and it's very robust, but as noted, it was discontinued and a replacement is in beta. I wasn't aware of it, but apparently there were some serious security vulnerabilities in the old converter, which is why it should be avoided. Note, however, that the new converter only goes as far back as vCenter 6.5U3, so if you want to convert older machines on 6.0 and 5.x, you'll need to find other means, such as the older converter or Veeam/Veeam Agent.
https://www.theregister.com/2022/09/14/vmware_teases_replacement_for_soinsecu
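For anyone scripting this, here's a rough pyVmomi sketch of the clone-to-flatten approach. The host, credentials, and VM names are placeholders, and the empty RelocateSpec simply assumes the clone lands on the same storage; a clone copies the VM's current state into new flat disks, leaving the snapshot chain behind:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in prod
si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass", sslContext=ctx)
content = si.RetrieveContent()

# Find the source VM by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "bad-snapshot-vm")
view.Destroy()

# Clone into the same folder; powered-off source preferred, as noted above.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False, template=False)
task = vm.CloneVM_Task(folder=vm.parent, name="bad-snapshot-vm-clone", spec=spec)
# ...wait on the task, verify the clone boots, then retire the original and
# point the backup job at the new VM (new backup chain, as mentioned above).
Disconnect(si)
```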
That's an awesome shirt.

Wow, just the speakers and sponsors for that event are wild. I started looking at the sessions and that is a huge event! Maybe next year. I only wish the shirts came in 3XL. I have a couple from previous events and I have to squeeze into a 2XL. Clearly it's a shirt issue, and not a me issue, though. ;-)