Replies posted by dloseke
I’m not sure if my team has ever had to do a restore on this VM… if so, it would likely have been an individual file or two, but not the whole VM. I actually don’t have a place to drop the VM to test restoring this one. I’m in a similar situation. I have spun it up in Veeam and pulled files off of a 100TB+ file server, but I don’t usually have over 100TB just kicking around to test something. I did test restoring a few 20TB servers from tape a few weeks ago to see how long it would take, and it wasn’t great, but it worked at least. I have a client that I installed a Nimble array for last week (an HF20 to replace a CS1000, if you care). I will be finishing the VM migration from the old array to the new one next week, and one thing they wanted to try before we tear out the CS1000 is a full restore of all VMs from the tape server I installed about 6 months ago. I don’t expect blazing fast speeds or anything, but they wanted to make sure it worked, which I’m all for testing, and I’d
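For anyone curious what “how long it would take” looks like on paper, here’s a rough back-of-envelope sketch. The throughput figures are assumptions I picked purely for illustration, not numbers from that particular tape job:

```python
# Rough estimate of how long a full restore from tape might take.
# The sustained rates below are assumptions (roughly LTO-class territory);
# substitute whatever throughput your own tape jobs actually report.

def restore_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Estimate restore time in hours for data_tb terabytes at a sustained rate."""
    data_mb = data_tb * 1_000_000  # TB -> MB (decimal, as vendors usually quote)
    return data_mb / throughput_mb_s / 3600

for rate in (150, 300):
    print(f"20 TB at {rate} MB/s is roughly {restore_hours(20, rate):.1f} hours")
# ~37 hours at 150 MB/s, ~18.5 hours at 300 MB/s, ignoring tape changes,
# verification passes, and the usual slowdowns on lots of small files.
```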
In all seriousness, for archive data, does it need to be on a Compellent? Seems like an appropriate time to introduce a cheaper storage tier such as a Synology if the access will be infrequent. Yeah, before we consolidated it all to the Compellent, all of that data was spread across two Synologys (one with an expansion unit), two EqualLogic PS6500s, and a PS6210E. It was nice to bring it all together, but the archive data was somewhat overlooked since some of it was attached to a DC as an RDM, and two of the volumes to another VM. It was nice to get it all in one place. The smaller of the two Synologys was repurposed as a backup repo for one of the archives, but I found out today I’m getting errors about the CPU fan failing. We’ll see what shakes out of that.
It has to be mandatory in the cloud? Have you thought about purchasing deduplicating storage, like HPE StoreOnce or Quantum? You can have them as a virtual appliance or a physical appliance. They say the dedupe ratio is up to 20:1 (realistically more like 10:1), so maybe you can fit this 50TB in a 5TB dedupe store. Find some extra info here: Quantum has a free Community dedupe virtual appliance up to 5TB (20:1, so up to 100TB logical), so you can give it a shot. Or HPE StoreOnce; this blew my mind, the HPE StoreOnce in Azure, have a look. Br. No, not necessarily, but it was a question they had. That said, I forgot about trying a dedupe appliance... I have a Quantum DXi trial/free edition that I want to try in my lab but haven’t had a chance. Thank you for that suggestion!
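A quick sketch of the capacity math behind that 50TB-into-5TB suggestion, using the ratios from the post as assumptions (your real dedupe ratio depends heavily on the data, so treat these as best/typical cases rather than guarantees):

```python
# Sanity check on whether ~50 TB of backups could land on a small dedupe
# appliance. 20:1 is the vendor's best case, ~10:1 the more realistic figure
# mentioned above -- both are assumptions, not measurements.

def physical_tb_needed(logical_tb: float, dedup_ratio: float) -> float:
    """Physical capacity required to hold logical_tb at a given dedupe ratio."""
    return logical_tb / dedup_ratio

for ratio in (10, 20):
    print(f"50 TB logical at {ratio}:1 -> {physical_tb_needed(50, ratio):.1f} TB physical")
# 10:1 -> 5.0 TB, 20:1 -> 2.5 TB, which is why a 5 TB community-edition
# appliance is at least worth testing before buying anything.
```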
Very interesting... I haven’t done anything with Okta but will be deploying VDRO to a client that uses it heavily in a few weeks. Looks quite cool though, and makes me wonder if we’re hitching our wagon to the wrong horse as we perform more SSO integrations/federations with Azure AD and Duo.
I seem to recall, I think in the VMCE classes, that if you’re using a deduplicating device, you should allow the device to do all of the work. Failure to do so can result in less-than-optimal dedupe and compression, and as I recall, can also cause some performance issues. I agree, I’d go with Dell here (or HPE/Quantum, etc., when using their devices).
Buckle up kids... this is a long one! TL;DR: Client is on really old hardware. We recommend replacing it. Hardware fails as expected. Veeam and some creative engineering get them back online. Client has a hard time paying the bill, makes a claim to insurance, does eventually pay us, but we eventually fire them.

The long version: Have/had a client (small rural community hospital) that was running on a 7-year-old EqualLogic PS6100 storage array and 6-year-old Dell ESXi hosts; all were retired/donated hardware from other businesses. No hardware warranty/support. We told the client that this was a huge risk to hospital operations: this hardware could fail and take everything down. We gave them a proposal to replace the hardware, but they were short on money of course and were going to ride it out without a plan. Nearly a year later, we went back to them to let them know they were at even more risk. We in fact went on-site, sat down with them and our proposal, and walked them to the
In the above post, I noted that I had a client with a very old EqualLogic PS4100 array that failed. Here’s some backstory. Client has some old hardware and is trying to not spend money (there’s a theme in these stories). They have a PowerEdge R610 and a PowerEdge R710. The R710 has a failed iDRAC, so the server always runs with the fans at 100%, and when you boot it, they have to press a key to continue booting, every time. They had since purchased a replacement motherboard for the server but never got it installed. They also have several Windows Server 2003 and 2008 VMs… this is about 3 years ago, mind you. Some of those boxes are public DNS servers. There are a lot of home-brewed applications here, but all the developers have since left and nobody knows how the apps work. I replaced their old Cisco ASA firewall with a Barracuda NextGen and managed to find why their network was running slowly (all traffic was traversing their old Cisco voice router; bypassing it fixed the throughput). Lots of hai
Michael pretty much hit all the points, but I’ll just state what I like to do. This is all assuming that there are 10Gb capabilities end-to-end (servers, switching, storage devices). If I have 10Gb links and I have enough of them to separate iSCSI traffic from the VM traffic, I’ll set up trunk ports on one set of the 10Gb NICs for management, vMotion, and any VM networks on separate VLANs, and then use dedicated 10Gb ports for iSCSI. If I have separate 10Gb ports that I can dedicate to iSCSI traffic, separate from vMotion, management, and VM traffic, I’d do that, but it sounds like in your case you can’t, and that’s okay. Of course, use jumbo frames if possible, but again, make sure that is enabled end-to-end; if you don’t, you’ll end up with packet fragmentation and it will hurt your performance more than you’ll gain. If using the NAS for the backup repo as an iSCSI device, I don’t have an issue with it sitting on the same iSCSI network as your primary storage, but you CAN separate it to a
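To illustrate the “end-to-end or not at all” point about jumbo frames, here’s a tiny sketch. The device names and MTU values are made up for the example; the takeaway is that the path behaves like its smallest MTU, so one device left at 1500 negates (or worse, fragments) everything else:

```python
# Illustrative only: the effective path MTU is the smallest MTU along the path.

path_mtus = {
    "esxi vmkernel (vmk2)": 9000,
    "top-of-rack switch":   9000,
    "storage array port":   1500,   # the one everyone forgets
}

effective = min(path_mtus.values())
print(f"Effective path MTU: {effective}")

for device, mtu in path_mtus.items():
    if mtu > effective:
        print(f"  {device} is set to {mtu}, but the smallest MTU on the path is {effective}; "
              f"frames that size get fragmented or dropped at the smaller hop.")
```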
I know you mentioned using the license from when it was bought, but I just wanted to note that the 9.5 format(s) are different than the v10/v11 format as well, so if you just purchased licensing or renewed and got a renewal license file, that’s going to be in the v10/v11 format and you’ll need to download the older versions from the my.veeam.com portal, or as noted, contact support. And as others have also said, there’s typically very little reason to stick with such old versions of VBR, although those use cases do exist.
Thanks for sharing this, as I may need to look at getting back into MS certs again. It’s been a very long time since I have done anything since 2008 R2. 😋😂 It’s funny because my old IT Director from two jobs ago had his MCSE cert on the wall for NT4 as well as an old Novell CNA cert back in the day… this being the era of Server 2003 and 2008. I mean... not THAT outdated, but still... lol.
Agreed. I keep old SANs for temporary loaners when production has failed and it’s going to be a bit (such as waiting for a new SAN to arrive), lab gear, etc. But once things hit a certain age and lack of support, they shouldn’t be in production. Sure, you might get a few more years out of them, but is it worth the risk to your business, and what’s the cost of downtime once you factor in lost revenue, client frustration, and having employees sitting there twiddling their thumbs while they can’t do anything?
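If you want to put a rough number on that risk, a back-of-envelope like this helps make the case. Every figure here is purely illustrative; plug in your own revenue, headcount, and wage numbers:

```python
# Back-of-envelope downtime cost, to put "is it worth the risk?" into numbers.
# All inputs below are made-up example values.

def downtime_cost(hours: float, revenue_per_hour: float,
                  idle_employees: int, loaded_wage_per_hour: float) -> float:
    """Lost revenue plus wages paid to people who can't work during an outage."""
    return hours * (revenue_per_hour + idle_employees * loaded_wage_per_hour)

# e.g. two business days of downtime while waiting on replacement hardware
print(f"${downtime_cost(16, 2500, 40, 35):,.0f}")  # about $62,400
```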
Sockets for my entire production… Socket licenses could be an issue, because the sockets of the isolated environment are not licensed then… With VUL I think it is not a problem... This is a good question for your Veeam salesperson... To be fair, you could be using the complimentary VUL included with perpetual licensing, which would be a likely loophole where I’d say this would be okay.
If you’re about to migrate the VMs to your production hosts, and if those hosts already see/access the same storage, why don’t you just re-register the VMs in your production environment? You can still take a backup/VeeamZIP afterwards. I was wondering this as well. If you have two hosts (or clusters) that all see the same storage, I’m not sure why you wouldn’t be able to unregister the VM from the old host and re-register it on the new host. We’ve done migrations like that before. Obviously, there’d be a new level-0 backup happening on the new cluster, but since nothing is shared, I’m sure that’d be the case anyway. Unless I’m not understanding the infrastructure configuration here. Edit: Never mind… the volumes aren’t presented to both clusters; some volumes go to one cluster, others to the other. In that case, you could still unregister, remove the volumes from the old hosts, present them to the new hosts, mount the volumes (but don’t overwrite/format them), and then register the VMs.
The volumes are mapped to a different host cluster on the SAN. Totally separate environment. I recall something about separate clusters not being able to see a VMFS datastore created on another cluster/vCenter. I tested to see, and I was able to map the volumes, but they looked like empty, fresh volumes with no VMFS datastore on them. Okay, yeah, that’d be an issue if the volumes aren’t presented to both clusters. And since the clusters don’t talk to each other, that could be an issue, as the second cluster to connect to the volume is going to see those as foreign datastores. I’m not sure off the top of my head if it’s going to like that… you can probably mount the datastore without resignaturing, but I think you’re really opening yourself up to datastore corruption.
I cannot recover my backups with Veeam due to errors? My backup chain is 500 files using Forever Forward with no active/synthetic full backups. DOH 🤣 I’ve run across this with clients that we onboarded where their previous provider was using StorageCraft ShadowProtect and ImageManager wasn’t configured to maintain their restore points. That backup chain gets really long, silently fails, takes a long time to restore, and just stops working. I can’t imagine forever forward without periodic fulls to assist.
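As a toy illustration of why a 500-file chain with no fulls makes me nervous: a restore has to walk the full backup plus every increment behind your restore point, so a single bad file anywhere in the chain can break it. The per-file failure rate below is invented purely to show how the odds compound across a long chain:

```python
# Toy model: assume each file in the chain is independently readable 99.9% of
# the time (an invented number) and see how the whole-chain odds fall off.

per_file_ok = 0.999
for chain_len in (30, 100, 500):
    chain_ok = per_file_ok ** chain_len
    print(f"{chain_len:>3} files in chain -> ~{chain_ok:.1%} chance the whole chain is intact")
# 30 -> ~97%, 100 -> ~90.5%, 500 -> ~60.6% under that assumption -- which is
# the argument for periodic synthetic/active fulls and backup health checks.
```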
The two stereotypical but not false excuses always have to do with RAID and snapshots. That said, I hear far fewer excuses for not doing it these days; more often folks just don’t think about it or simply forget about it. The only other excuses I tend to hear revolve around the cost of licensing and hardware.
Biggest challenge for me was, and still is for most people, understanding the networking requirements for SureBackup. Not so much the complexity but the terminology... “Production network” doesn’t mean your actual production network; it means what you consider to be classed as the “production network” in your virtual lab setup... took me a while to get my head around that. SureBackup and SureReplica can be difficult at times, and what you mentioned I found difficult to grasp at first as well. I find it hardest (and still struggle) when running the virtual lab at a different site/network than the server running the backup role, because the backup server doesn’t know how to route to the remote site/network to get to the proxy appliance running the sandbox to verify the restored VMs. I put in a feature request a few months back to see if these tasks can be run from the appliance rather than the backup server to get around the routing issues.
Are you guys planning an online or in-person event? It’s lovely to see some interest around your community! If possible, also share a time schedule and so on; I would love to attend (if it is in person, help me out finding a sponsor for the flights and hotel 😂). Br, Luis. I do all of these events online currently. I haven’t enough time/bandwidth to do much more. Plus, I’m actually in the US, so VUG Canada events aren’t convenient for me to travel to, let alone some of the VUG US events, which I’m fairly new to. To be fair, I had a VMUG event happen something like 4 blocks from my office last month that I planned on attending, but a family emergency kept me at home that day…