Replies posted by dloseke
Offsite tapes are all well and good… but how many people have either a) tested restores or b) actually had to restore in anger, 2-3 years down the line? I have a client that I deployed tape to this summer. Next week I'm going to be replacing their SAN (and upgrading their NAS's that are used as Veeam repos), but before we pull out the old SAN, they had this crazy/not-so-crazy idea to do a full restore of all VMs from tape to the old SAN to verify all is well. And how did the restore go? Flawless? Don't know yet. The SAN is getting installed in a couple hours.
Nice, how many tapes can it manage? For my Qualstar, reading the current specs, it looks like it would hold about 1,700 tapes between the main unit and the MEM add-on unit. I don't think we ever had it completely full, since some tapes were always stored offsite in a secured location, but to say it held a lot would be an understatement. When I decommissioned it, we had issues with the robot being out of alignment again, so we just had it unlock the door and manually removed the tapes. We took one of those plastic Rubbermaid carts and stacked them all up on top of it. I think we removed 500-600 tapes from it, which made for quite the heavy cart.
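For context, some back-of-the-envelope capacity math on a library that size. The 1,700-slot figure is from the post above; the per-cartridge numbers are the published native (uncompressed) capacities for LTO-5 and LTO-6, which is roughly the generation range mentioned:

```python
# Rough native capacity of a fully populated 1,700-slot library.
# Slot count comes from the post; capacities are published LTO native figures.
SLOTS = 1700
NATIVE_TB = {"LTO-5": 1.5, "LTO-6": 2.5}

for gen, tb in NATIVE_TB.items():
    total_pb = SLOTS * tb / 1000
    print(f"{gen}: {SLOTS} slots x {tb} TB = {total_pb:.2f} PB native")
```

Somewhere between ~2.5 and ~4.25 PB before compression, so "a lot" really is an understatement.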
So at first I thought that using Chocolatey was unnecessary, but then considering how often I end up installing Veeam for clients… maybe not so unnecessary. This may actually be very helpful, and it certainly forces me to think a bit outside of the box. Thanks for sharing!
Basically what others said above, but for me it's going to depend on what version of SQL is installed on the machines. If it's an older version (such as pre-2016) and nothing else is using the SQL deployment, I'd uninstall it, clean up the old DBs, and then proceed like normal so that the installer deploys SQL Express 2016. If already on 2016, I'd remove the old DBs in Management Studio, delete the DB files from the filesystem, and then create a new database during the installation. Or, the easy button is to just give the database a new name, but that's going to leave a couple of DBs on the machine which may no longer be needed, and I'd rather keep things tidy if possible.
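The decision logic above can be sketched as a little function. To be clear, this is only an illustration of the branching described in the post (the 2016 threshold and the actions are from the post, not any Veeam tooling):

```python
def plan_sql_cleanup(version_year: int, shared_instance: bool) -> str:
    """Illustrative only: mirrors the upgrade/cleanup decision from the post."""
    if version_year < 2016 and not shared_instance:
        # Nothing else uses the instance: remove it and let setup install Express 2016.
        return "uninstall SQL, clean up old DB files, install fresh SQL Express 2016"
    if version_year >= 2016:
        # Keep the instance, but start with a clean database.
        return "drop old DBs in Management Studio, delete files, create new DB during install"
    # The "easy button": leaves stale DBs behind.
    return "keep instance, create a newly named DB (old DBs remain)"
```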
In my lab I install new versions right away (even betas); I'm an early adopter. I test and play with the new features to see what I'll use in production. For production use I'm limited by the service provider we replicate our DR environment to using Veeam replication; they are too slow in updating to the latest versions, in my opinion. I'm not afraid to upgrade in the lab. That's the whole point of the lab. Actually, what I find interesting is how many of us appear to have labs. They're generally few and far between, yet it seems like most of us here have one, which might be saying something.
I have two LTO8 libraries running full bore about 24h a day. This excites me, but my gosh, that is a LOT of data. Data streams will need to be added or sped up significantly. My biggest gripe with LTO8 is how long a single backup or restore takes when you have a VM that spans a few tapes. I'd much rather write it to several tapes at once. Curious, when you talk about having a library, how many drives are you talking about? I used to manage a Qualstar library in a previous role. It consisted of the library unit with… hard to remember, but I think 8 drives. I want to say they were something like LTO5 and LTO6 (could have been LTO4 and LTO5). It had a turnstile on one side. We were using Quest NetVault to stage backup data to a SAN, and then it would write the data off from the SAN to the Fibre Channel drives in the Qualstar. Quite the beast. It eventually got replaced by Dell Avamar/DataDomain and I decommissioned the tape library. My understanding is that the libra
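The "VM spanning a few tapes" gripe is easy to put numbers on. Assuming the drive sustains LTO-8's published native rate (~360 MB/s) for the whole job, which real-world streams rarely do once the drive starts shoe-shining:

```python
# Rough single-stream time to move a given amount of data at LTO-8's
# published native rate (~360 MB/s). Real jobs are usually slower.
NATIVE_MB_S = 360

def hours_to_stream(size_tb: float, drives: int = 1) -> float:
    """Hours to read/write size_tb terabytes across `drives` parallel drives."""
    return size_tb * 1_000_000 / (NATIVE_MB_S * drives) / 3600

print(f"{hours_to_stream(12):.1f} h on one drive")        # one full cartridge's worth
print(f"{hours_to_stream(12, drives=4):.1f} h on four")   # same data striped across 4
```

Roughly nine hours for a single cartridge's worth of data on one drive, which is exactly why writing to several tapes at once is so appealing.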
For me, it was the Google Nest Hub. The expectations of what I thought it could do and what it can actually do were highly mismatched. I guess it may also be due to the fact that it is not as well supported by third-party apps as the Amazon hub. I'm hit and miss on Google devices. I have 5 Nest/Home speakers, a Nest Hub (non-Max), and a Nest Hello (doorbell). The system is… okay? Sometimes it's great, sometimes it doesn't work that well, and sometimes you get REALLY weird results. I also have it paired with several Feit smart light bulbs, generic smart outlets, and a couple of generic garage door opener interfaces. Those work great for the most part, but there are still some quirks. On the same end of the spectrum, I have 8 Wyze cameras (4 V2 and 4 V3), and I love the cameras, but they're not the most reliable. But I also can't complain too loudly since they cost me something like $25-$40 each, which really is incredibly cheap. But while they're supposed to interface with the Google ecosys
Hyper-V, Storage Spaces, SCOM… you'll see a pattern emerging ;) Hyper-V is still not impressive to me, but it has grown a LOT. For anyone who has used Hyper-V 2008 R2, it felt soooo hacked together, and while it still has a bit of that feeling in 2016/2019 (and I assume 2022), it's nothing like it was.
In the past, I had a Microsoft Band. It was Microsoft's attempt at a smart watch. It worked okay, but eventually broke, and Microsoft abandoned the line, so I was just dead in the water. Now I use a Fossil 4th Gen (Carlyle), which has its own issues, but mine has been fairly trouble-free so far. Eventually I'll need to upgrade, though, and will probably end up in the Samsung realm now that they're finally using Google's Wear OS. Current disappointment? The Ford F-150 Lightning (EV truck). I've had a reservation in for nearly 2 years and have seen no sign of ordering anytime soon (which is okay, I can wait). The major promise was that the lower-capacity/range Pro series was a truck starting at $40k (before Federal tax credits, even). Since the Pro is the lower-end, less profitable truck, Ford is artificially limiting the number of Pros to be built. They did the reservation-to-order conversion in waves, and the Pros "ran out" in something like Wave 3 for the 2022 model year. They just closed
I’ve never used VBAZ and don’t have a lot of need for it, but I attended just so that I had an idea of what it is and how it works. I enjoyed it, though I did get pulled away for a little bit. That said, I would love to see more on Orchestrator. I’m going to be spinning it up in the lab in a couple weeks here to prep for deploying into production for a client next month.
Not right away, but I don’t think we really have a set standard either. With Veeam, I will upgrade the lab without delay if I have the time. Production is generally a couple of weeks behind. Clients are the same way… a lot of it is when I can get to it, though. I’m also a lot more confident in Veeam’s QA processes than in those of VMware and Microsoft. VMware updates… I’ve been bitten on those and tend to be pretty hesitant after the whole 7 U2 debacle. But I feel a lot more confident with U3, and on new deployments or updates I generally wait a week or two unless there’s some major exploit or something pushing for a faster timeline on the upgrade.
But please keep in mind that you should make a configuration backup of your VBR database and copy it to the DR site, too. With this you can set up a VBR server at the DR site and restore the backups. On the other hand, you can move the VBR server to the DR site instead and keep a simple repository server at the primary site for your primary backups. Then you have a complete backup environment at your DR site in a disaster. But here you should copy a configuration backup to the other site as well, in case the DR site is destroyed… This is always my recommendation… keep the VBR server at the recovery site if possible, because it’s easier to recover if there is a full site failure. The VBR server could also be the tape server. Then just have a repo/proxy server at the primary site. At the very least, keep a small repo at the recovery site to send your configuration database to. Also, since you’re keeping the tapes at the recovery site, depending on what data you’re keeping in the local repo at
I have:
2x DL360 G9 - 64 GB and 96 GB memory - 2 TB disk
2x DL360 G7 - 112 GB memory - 2 TB disks
1x Synology - 30 TB
1x HP ProCurve 2848
1x desktop with 10 TB just used for backups
Of course, I don't have everything powered on all the time. Since I bought the G9s, I always use only those. Only when I need to power on my VCF, vSAN, or NSX-T nested lab do I power on the G7s, because of the memory. Trying to find budget to get more memory for the G9s so I can decommission the G7s and try to sell them. Just a correction: my Synology DS1515+ is 10 TB and not 30 TB (don't know where I got that 30). Because you get an amazing 3:1 DRR on your NAS? ;-)
Reviving the talk of lab power consumption from earlier in the thread: I grabbed myself an Eve smart plug at the weekend. It's HomeKit-integrated and has remote power on/off functionality, which still worked via Bluetooth to turn things back on when I accidentally switched my firewall off during testing 😂 It monitors in realtime and can be queried as much as you like. I like that it doesn't have any remote servers to call out to, and it doesn't actually even connect to Wi-Fi to be able to stealthily send/receive any kind of data. Firmware updates are downloaded to your app and then sent on to your Eve device, which is really nice from a privacy mindset when the sea of IoT devices is very questionable in this regard… Instead it uses Thread & Bluetooth for all communications 😁 My 24x7 ESXi server costs me around 30p a day, averaging 30 W total power draw from the wall with a peak so far of 42 W, as I've got turbo boost etc. all disabled and power-saving modes defined on my motherboard & within the OS.
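Those figures hang together, for what it's worth. A quick sanity check of the ~30p/day claim, assuming a UK tariff of roughly 40p/kWh (the tariff is my assumption; the 30 W average and ~30p/day come from the post):

```python
# Sanity-check: steady 30 W draw over 24 h, priced at an assumed ~40p/kWh tariff.
WATTS = 30
PENCE_PER_KWH = 40  # assumed tariff, not from the post

kwh_per_day = WATTS * 24 / 1000          # energy used per day
pence_per_day = kwh_per_day * PENCE_PER_KWH
print(f"{kwh_per_day:.2f} kWh/day ≈ {pence_per_day:.0f}p/day")
```

0.72 kWh a day lands right around the quoted 30p at that kind of tariff.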
If using SQL Express, I wouldn’t worry much about SQL. You’ll hit the 10 GB per-database limit in Express before you would hit any sort of disk sizing issue. If you were using SQL Standard, it would be on a different server and a non-issue. If I were to put anything on a separate disk, it would be the backup repo for the VMs. That would be my primary concern.
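To put the 10 GB cap in perspective, here's a trivial projection. The cap is SQL Server Express's documented per-database limit; the current size and daily growth figures below are made-up example numbers, not a Veeam sizing rule:

```python
# Days until a database hits SQL Express's 10 GB per-database cap,
# at a constant growth rate. Inputs are illustrative examples only.
EXPRESS_CAP_GB = 10

def days_until_cap(current_gb: float, growth_mb_per_day: float) -> float:
    return (EXPRESS_CAP_GB - current_gb) * 1024 / growth_mb_per_day

print(f"{days_until_cap(2.0, 15):.0f} days")  # e.g. 2 GB today, growing 15 MB/day
```

Even with generous growth assumptions, the DB cap tends to arrive long before a reasonably sized OS disk fills up.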
Deleting the DCs from AD will not remove their DNS entries, as I recall. You’ll likely need to manually delete those entries from DNS. You should be able to do this from any DC that is replicating properly. Once those changes have replicated, you shouldn’t see any more entries for those old DCs. That said, you’ll need to remove them from Active Directory Sites and Services to actually remove the DCs from AD, and again, those changes will replicate to the other valid DCs in the domain. A graceful demotion is always better than forcibly removing DCs from a domain, but over the course of this conversation, it sounds as if that’s not an option. I believe if you forcibly remove a DC from a domain, there will be several DNS records remaining, such as the parent record, the A record, and any other records that reference the removed DC.
Really nice topic mixing Veeam and AD knowledge. Luckily I have never had to restore all the DCs of an infrastructure, but I’ll keep all this precious advice in a corner. Yeah, I shudder at the thought of restoring all DCs. I’d honestly consider shutting down all of the DCs, restoring one authoritatively, then cleaning up AD and building new DCs for any remote systems. Building a DC is practically a throwaway task anymore, especially since you don’t have to do metadata cleanups these days. I have thought of that too: just restoring one DC, cleaning it up, and then building others from it. The DC most likely to have the latest changes on it (password changes, trust relationships) does not hold the FSMO roles, though. So if I started with it, I would have to seize all the FSMO roles. If I start with the server holding the FSMO roles, it is at their head office, but because most of their users log into VDI desktops at their data center, most likely there will be more