
Does anyone have experience performing physical-to-virtual migrations of Windows MSCS clusters with Veeam VBR?

Some experience. What roles is the cluster providing?

 

Provided quorum is maintained, it typically works fine using the Veeam Agent and restoring as a VM.

 

Which OS version as well?


MS Windows 2008 R2.


As long as it is at least SP1, it is supported…


As @JMeixner says, if it’s supported by Veeam & VMware, it’s fine. There’s always been a bit of a funny thing about VMXNET3, though, so it might be worth sticking with the E1000 as the emulated NIC. I’d also install VMware Tools prior to migration, so you’ve got all your drivers ready.

 

And of course, if you want to do a test run of this, DataLabs will show you what’s really gonna happen!


@MicoolPaul  If you have some experience, could you share the high-level P2V rundown for reference? Thanks.


 I’d also install VMware Tools prior to migration, so you’ve got all your drivers ready.

 

Why on earth has this never occurred to me as I perform P2V and V2V migrations to VMware? Never. And it totally makes sense.


Hi @victorwu, sorry, busy few days.

 

The rundown in general:

Prerequisites:

  • Install a VMware Tools version compatible with the guest OS.
  • Ensure there are no pending Windows or application updates, that all cluster nodes are patched to the same level, and that no other maintenance work overlaps with the migration. Windows Server 2008 R2 won’t be getting any new patches unless your client is on ESU, but that doesn’t mean their cluster nodes are patched consistently. You don’t want any VMware Tools dependencies unmet on some nodes. WS 2008 R2 also had cluster-specific patches, so those should definitely all be at the same version…
  • Catalog the nodes & roles the cluster provides, not just from Failover Cluster Manager but also anything layered on top such as SQL Server.
  • Run the cluster validation check before changing anything and export any reports you can, as a migration is a great exercise in people pointing fingers when issues appear… I’d toggle the active node where relevant too, so you know it isn’t already broken on a particular node, again exporting the reports (a rough PowerShell sketch follows this list).
  • Determine whether the cluster leverages shared storage; if so, those disks need to be excluded from the Veeam Agent backup.
  • If there’s shared storage, how is it accessed? Be sure to validate there’s a storage migration path for you: if it’s SAS storage, can your entire cluster connect to it, for example? If it’s iSCSI/FC, can your virtualisation platform access this infrastructure and pass it through to the VM? Do these storage platforms have access controls in place? It’s worth exporting any iSCSI IQNs in case one changes, for example.
  • Your SCSI and NIC hardware will of course change, and the Windows Server 2008 generation is worse than most at handling VMXNET3, so you may prefer to emulate an Intel NIC (E1000) instead. Be sure to back up the network config and look at the SCSI layout of the physical server to see if there were any specific disk-to-controller pairings for IO queues.
  • If using iSCSI networking, take note of the NIC details including advanced parameters such as VLAN for any separate iSCSI NICs.
  • Windows licensing: odds are that P2V is gonna trigger Windows activation, so be sure you’ve got a valid key from the customer (and if the key is OEM, cover yourself by reminding them that OEM licences are non-transferable).
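
A rough PowerShell sketch of that evidence gathering (cluster validation, role/resource inventory, NIC and iSCSI details). The output folder and exact commands are illustrative, not a definitive procedure:

```powershell
# Illustrative only: gather pre-migration evidence on a cluster node (run as administrator).
# Assumes the FailoverClusters module is available; the output folder is a made-up example.
Import-Module FailoverClusters

$out = 'C:\P2V-Evidence'
New-Item -ItemType Directory -Path $out -Force | Out-Null

# Cluster validation report (archive the generated report with the rest of your evidence)
Test-Cluster

# Nodes, roles/groups and resources, including the shared 'Physical Disk' resources that
# will need excluding from the Veeam Agent backup job
Get-ClusterNode     | Format-List * | Out-File "$out\nodes.txt"
Get-ClusterGroup    | Format-List * | Out-File "$out\groups.txt"
Get-ClusterResource | Format-List * | Out-File "$out\resources.txt"

# Network configuration, including any dedicated iSCSI NICs and their VLAN/IP settings
ipconfig /all | Out-File "$out\ipconfig.txt"
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    Format-List * | Out-File "$out\nic-config.txt"

# iSCSI session details (initiator/target IQNs), if iSCSI shared storage is in use
iscsicli sessionlist | Out-File "$out\iscsi-sessions.txt"
```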

With those points covered, for the migration process I’d perform the initial backup, being sure to exclude any cluster disks from the job. After the initial backup, I’d gracefully set one of the cluster nodes to maintenance mode/offline and make sure the customer is happy the cluster is still running successfully. Perform the final backup and shut down the physical host.
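
A minimal sketch of that drain step, assuming 2008 R2-era cmdlets and made-up node names (Suspend-ClusterNode only pauses the node on 2008 R2, the -Drain switch came later, so roles are moved off explicitly):

```powershell
# Illustrative only: pause a node and move its roles off before the final backup and shutdown.
Import-Module FailoverClusters

Suspend-ClusterNode -Name 'NODE1'            # pause: nothing new will fail over to this node
Get-ClusterGroup |
    Where-Object { $_.OwnerNode.Name -eq 'NODE1' } |
    ForEach-Object { Move-ClusterGroup -Name $_.Name -Node 'NODE2' }
Get-ClusterGroup                             # confirm all roles are online on the remaining node(s)
```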

Perform P2V recovery of the node, boot with network disabled, allowing for the multiple reboots for VM hardware installation, IP address reconfiguration etc.

After the reboots are all finished, bring the network adapter online again, check that shared resource connectivity is okay where possible, and run a report on the cluster to confirm all nodes are healthy. Return the node to a healthy state in the cluster by exiting any maintenance mode, perform another health check, then finally force the migrated node to be the active node, and run a final report. All should be fine now; on to the next node.
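
And a matching sketch for bringing the migrated node back in and forcing it active, again with hypothetical node and role names:

```powershell
# Illustrative only: return the virtualised node to service, make it the active node, re-validate.
Import-Module FailoverClusters

Resume-ClusterNode -Name 'NODE1'                                  # leave the paused state
Get-ClusterNode | Format-Table Name, State                        # every node should be 'Up'
Move-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' -Node 'NODE1'  # force the role onto the migrated node
Test-Cluster                                                      # fresh validation report for the records
```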

 

Phone’s out of battery, but I hope this provides a good checklist. Feel free to ask questions about anything I’ve said! Thanks.


You can do “DIY” DataLabbing to check its behaviour, and in v12 something new coming for agents will make this easier to test.


@MicoolPaul Thanks for your detailed information, I will test it.


Great @MicoolPaul 

 @victorwu 

Hi
In my opinion it is easier to use a traditional migration method.
Prepare two VMs with all the necessary prerequisites, covering both the Windows Server 2008 R2 OS side and the VMware best-practice side.
Join the two VMs to the MS cluster and fail the roles over to the new VMs.

Evict physical servers.
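
A rough sketch of that swing approach with hypothetical node and role names (the new VMs must already meet the cluster’s OS and patch-level requirements):

```powershell
# Illustrative only: join the new VMs, fail the roles over, then evict the physical nodes.
Import-Module FailoverClusters

Add-ClusterNode -Name 'VMNODE1'
Add-ClusterNode -Name 'VMNODE2'
Move-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' -Node 'VMNODE1'
Get-ClusterGroup                              # confirm the roles are healthy on the new VMs
Remove-ClusterNode -Name 'PHYSNODE1'          # evict the old physical nodes
Remove-ClusterNode -Name 'PHYSNODE2'
```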

Or do you want to play with datalab? 😃

Done! :)

 


Rick, any chance V12 will have the capability of testing SureReplicas on other subnets without having to mess with networks and static routes so that the backup server can find the replicas in the sandbox at the remote site? SureBackup works great, but SureReplica sure seems to take a bit more finessing to get the ping test and such to work when the replicas reside at a remote site over a VPN on a different subnet. I would assume that’s not what you’re alluding to, so it would be great if it could be changed so that the pings can be sourced from the proxy/virtual lab appliance and not the backup server.


@dloseke my recommendation is to drop that in the R&D forums as a feature request; at present it’s due to how networking “works”. Veeam creates the static route to the virtual lab for your isolated subnet, but when the network packet is formed and the ARP request is carried out, because the destination is in a different subnet it doesn’t get a response, so the traffic has to use another route, normally your default gateway.

 

I could see value in deploying some logic into the virtual lab to make it the origin of the tests, with VBR just submitting the test parameters to it; that way the testing can always take place without network manipulation, as the virtual lab always needs a production-subnet IP address that VBR can talk to anyway.
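
To make the routing part concrete, this is roughly the kind of static route involved; both addresses are invented for the example (a masquerade network of 172.18.20.0/24 behind a virtual lab appliance whose production IP is 10.0.0.50):

```powershell
# Illustration only, with made-up addresses: the backup server reaches the isolated VMs via a
# static route pointing the masquerade network at the virtual lab appliance's production IP.
route add 172.18.20.0 mask 255.255.255.0 10.0.0.50
route print 172.18.20.*      # verify the route exists; 'route delete 172.18.20.0' removes it again
```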


I’ll do that. I was actually astonished to find out that the traffic didn’t originate from the virtual lab but from the VBR server instead. I noted this specifically for SureReplica, but my preference is to have the backup server at the recovery site, so SureReplica would work while SureBackup at the primary site would then have the issue instead. Anyhow, I’ll take that over to R&D. Thanks!

 


Please post the link here once you do and I’ll +1 supporting it.

 

The reason it’s driven from VBR is that you can run all of your VBS/PoSH scripts etc. from the VBR server for your integration testing.
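
As an example of the kind of thing that enables, here’s a minimal, hypothetical test script SureBackup could launch from the VBR server: it probes a TCP port on the VM under test and signals pass/fail via its exit code (zero meaning success); the address and port are placeholders.

```powershell
# Hypothetical SureBackup-style test script: succeed (exit 0) if the port answers, fail otherwise.
param(
    [string]$TargetIp = '172.18.20.15',   # masqueraded address of the VM under test (placeholder)
    [int]$Port = 1433
)

$client = New-Object System.Net.Sockets.TcpClient
try {
    $client.Connect($TargetIp, $Port)
    exit 0
} catch {
    exit 1
} finally {
    $client.Close()
}
```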

