Found some in this article: https://masteringvmware.com/how-to-upgrade-vcenter-server-7-to-vcenter-server-8-step-by-step/ The screenshot under Step 9 of that article is what I was referring to. We use the hostname as the "VM Name", which caused the confusion/concern on my part. That is actually just the label shown in the list of VMs in vCenter and NOT the FQDN. The screenshot under Step 19 (once in Stage 2) confirms what @MicoolPaul and @Chris.Childerhose were saying.
@MicoolPaul do you have a screenshot of the window where they ask for the name? I am already in Stage 2, so I don't think it'll be possible, but I'll see if I can find one online.
@MicoolPaul really? That's a bit confusing. The installer screen did mention the temporary IP, but it also asked for a new name for the VCSA, which is why I thought it would change the FQDN. I'm guessing that what's really happening, then, is that the "label" shown for the VCSA would change to whatever I gave it (a very unimaginative VCSA80, to indicate the version), but it would still have myVCSA.domain.local as its FQDN, and therefore it would not really affect the backup jobs, since they look for myVCSA.domain.local. Cool! One learns something new every day :-)
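If it helps to see that distinction concretely, here's a minimal Python sketch (a toy model, not actual vCenter or Veeam behavior; all names are made up) of why a backup job keyed on the FQDN is unaffected by a change to the inventory label:

```python
# Toy model: a vCenter inventory entry has a display label and a guest FQDN,
# and the backup job identifies the appliance by FQDN only.
from dataclasses import dataclass

@dataclass
class InventoryVM:
    label: str   # the "VM Name" shown in the vCenter VM list
    fqdn: str    # the guest's actual DNS name

def job_matches(job_target_fqdn: str, vm: InventoryVM) -> bool:
    # The job resolves its target by FQDN, not by display label.
    return vm.fqdn == job_target_fqdn

vcsa = InventoryVM(label="VCSA70", fqdn="myVCSA.domain.local")
print(job_matches("myVCSA.domain.local", vcsa))  # True

# The upgrade only changes the inventory label...
vcsa.label = "VCSA80"
# ...so the job still finds the same appliance:
print(job_matches("myVCSA.domain.local", vcsa))  # True
```

In other words, as long as the new appliance answers to the same FQDN, the backup jobs shouldn't notice the difference.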
@JMeixner oh thanks. Yes, we can definitely re-scan, and the same creds are being used. We'll give that a go; otherwise, I guess we can always power down the new vCenter and power the old one back up, and it should work? At least that's the "back-out" plan from the VCSA installer. Cheers!
Thank you for the quick reply @coolsport00 . However, I can't find the wizard to change the name. I can change the credentials for the existing vCenter, but there's no option to change the hostname.
Wow! What an awesome community this is! Thanks to everyone who provided their input. The amount of information is overwhelming (but definitely appreciated), so I had to take a moment to try and digest all of it (that, and the fact that our 10G switches, which had 10G uplinks, were recently replaced with ones that have 2x40G uplinks, so I got pretty busy).

First off, to clarify: our Synology is currently 1G, BUT we can add 10G cards to it, so let's just assume everything is 10G end-to-end (the switch ports are 10G as well, just to be clear).

Secondly, looking at the monitoring tab in vSphere vCenter, our busiest physical servers (the one that hosts AD/DNS and the one that hosts a SQL VM and a web VM) have never exceeded 60 Mbps combined TX+RX.

Third, we use vMotion (on running VMs) between physical hosts connected to the same NetApp (so we don't actually move the storage, just the compute resource). For those curious why, it's because we have a 2nd server room on the other end of the campus that contai
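To put those numbers in perspective, here's a quick back-of-the-envelope calculation (the 1 TB dataset size is a made-up example) comparing transfer time at the observed ~60 Mbps against a theoretical 10 Gbps link:

```python
# Rough transfer-time estimate; link speed in bits/s, data size in bytes.
# Ideal pipe: no protocol overhead, dedupe, or compression considered.
def transfer_hours(data_bytes: float, link_bps: float) -> float:
    """Hours to move data_bytes over a link running at link_bps."""
    return (data_bytes * 8) / link_bps / 3600

ONE_TB = 1e12  # 1 TB (decimal), purely an illustrative dataset size

# Observed peak on the busiest hosts: ~60 Mbps combined TX+RX
print(f"{transfer_hours(ONE_TB, 60e6):.1f} h at 60 Mbps")   # ~37.0 h
# Theoretical 10 Gbps end-to-end
print(f"{transfer_hours(ONE_TB, 10e9):.2f} h at 10 Gbps")   # ~0.22 h (~13 min)
```

So even if real-world throughput lands well below line rate, the jump from the current observed traffic levels to 10G end-to-end leaves plenty of headroom.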
Is it advisable to use just the 2 x 10Gbps NICs for everything (with traffic still split into port groups / VLANs)? It would save us a ton of cabling ...
Hi Chris, If I recall the documents correctly, for NFS, Veeam recommends Direct NFS. Is this what you mean by direct from storage SAN mode?
Hi MicoolPaul, Our backup storage is a local Synology running a VM acting as immutable storage + a cloud repository from a Veeam provider.