
Current lab setup (all settings inherited from the previous host):
HypervHostB with a private switch
2 virtual machines on this private switch
VM1 - ClientPC with Windows 10 installed from ISO
VM2 - PrimaryDC (Veeam restored from HypervHostA to HypervHostB - session type: Full VM Restore) - this server holds the AD FS mgmt, DHCP, DNS and GPO roles
    - has 2 network adapters:
      Data: IP 192.168.50.1, subnet 255.255.255.0, GW 192.168.50.150, preferred DNS 192.168.60.240 (DC2), secondary DNS 192.168.50.1
      Voice: IP 20.20.20.5, subnet 255.255.255.0, GW 20.20.20.1, DNS1: PDC, DNS2: DC2

Observation:
1. VM2 fired up nicely; AD components such as ADUC, Domains and Trusts, GPO tools etc. all open fine, and I am able to log on with my local and domain AD accounts successfully.
2. Fired up VM1; VM1 picked up an IP via DHCP successfully, showing the domain name schools.local on the VM's network adapter.
3. Both VM1 and VM2 can successfully ping each other by IP and by DNS name; nslookup works as well.
4. VM1 is listed in DNS on VM2.

Checklist (things I did):
1. VM1's clock was 2 hours behind - error message; changed it to the same time as VM2 - same error message.
2. Error message with the current TCP/IP setup on both VMs.
3. Removed the DC2 IP (as DC2 is not in the test/lab environment) from VM2's TCP/IP settings on both adapters - same error message.
4. Gave VM1 a static IP with DNS pointing only to VM2, clearing the secondary DNS entry - same error message.
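For anyone hitting a similarly ambiguous domain-join error, a few standard Windows checks can narrow down whether the client can actually locate and talk to the DC. These are generic built-in tools, not something specific to this thread; schools.local is the domain from this lab:

```
REM From the client: can a domain controller be located for the domain?
nltest /dsgetdc:schools.local

REM Are the SRV records the join process depends on resolvable?
nslookup -type=SRV _ldap._tcp.dc._msdcs.schools.local

REM On the DC itself: is it healthy and advertising as a DC/DNS server?
dcdiag /test:dns /test:advertising
```

If nltest or the SRV lookup fails while plain pings succeed, the problem is usually DC locator/DNS records rather than basic connectivity.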

Goal: I plan to upgrade my current AD environment from 2012 R2 to 2022 Standard or 2025 on both DC1 and DC2. The current state: 2012 R2 Standard is running on both DC1 and DC2, where DC2 went about 250 days stale and was taken offline. I observed these DCs are running at the Server 2003 domain functional level - pretty old, I know. Everything had been working in the environment for years before me (what is not broken, don't touch, right?). However, there is now a need to move to the latest server OS, so the plan is either:
1. an in-place upgrade path on DC1 from 2012 R2 to 2016 to 2019 to 2022 or 2025, or
2. build a fresh Server 2022 or 2025 machine, join it to the domain, promote it to a DC and (with the required steps, of course) make it the new DC1, then demote the old DC1 (VM2). Then create a new DC2 running 2022 or 2025, join it to the domain, promote it to a DC as the new secondary DC, and raise the functional level at the end. Both new domain controllers would reuse the same IPs as the old ones.
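Option 2 above (fresh server, promote, demote) can be sketched with the standard ADDSDeployment cmdlets. This is a minimal outline under stated assumptions - the server names, credential, and functional-level target are placeholders, and FSMO transfer/verification steps are abbreviated:

```powershell
# On the fresh Server 2022/2025 machine, already joined to schools.local
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote it as an additional DC in the existing domain (also a DNS server, like the old DCs)
Install-ADDSDomainController -DomainName "schools.local" -InstallDns -Credential (Get-Credential SCHOOLS\Administrator)

# Transfer the FSMO roles to the new DC (run before demoting the old DC1)
Move-ADDirectoryServerOperationMasterRole -Identity "NEWDC1" -OperationMasterRole SchemaMaster,DomainNamingMaster,PDCEmulator,RIDMaster,InfrastructureMaster

# On the old DC1, demote it once replication and FSMO ownership are verified
Uninstall-ADDSDomainController

# Finally, raise the functional level once only the new DCs remain
# (Windows2016 is the highest level Server 2022 supports; Server 2025 adds a newer one)
Set-ADDomainMode -Identity schools.local -DomainMode Windows2016Domain
Set-ADForestMode -Identity schools.local -ForestMode Windows2016Forest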

As a best practice, I always use private switches for my test/lab environments before production.

Your guidance and/or resolution to this issue would be greatly appreciated, blessings.

This is an old blog post but a good one on AD restores - see if it helps at all, but you may need to get in touch with Support: AD Protection Best Practices: Restoring Domain Controller (Part 2)


VM2 is healthy based on my checks; I just can't join anything to the domain in this private-switch lab. Did some googling, and plenty of people (Veeam customers) have experienced the same issue with no real fix, lol. So I have to follow that Part 2 tutorial and redo the restore, "hoping" it will work - all that effort just to see if something works. I am not sure what is going on; this is an ambiguous error, trust me. So there is no other known fix for this issue, and it's uncertain whether the Veeam restore caused the error, a DC2 metadata cleanup is needed, or it's some Microsoft mystery bug, IDK.

My DC backup has the required settings intact.

Keep us posted on what happens.


I fixed it, lol - I followed no KB article at all. The solution was a metadata cleanup of DC2 from DC1: simply removing all entries of DC2 from ADUC, Sites and Services and DNS, then ntdsutil to put the cherry on top.

So my Veeam Hyper-V VM-restored AD DC was perfectly healthy after all. No need for any of the Part 1 and 2 steps stipulated by Veeam. Thanks anyway folks. God bless. I can proceed now with my goal.
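For reference, the interactive ntdsutil part of a metadata cleanup looks roughly like this. The server DN below is an example assuming the default site name - adjust the site and domain components to match your forest:

```
ntdsutil
activate instance ntds
metadata cleanup
connections
connect to server PRIMARYDC
quit
remove selected server CN=DC2,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=schools,DC=local
quit
quit
```

Note that on Server 2008 and later, deleting the stale DC's computer object in ADUC (or its NTDS Settings object in Sites and Services) performs the metadata cleanup automatically - which lines up with what worked here; ntdsutil is the belt-and-braces way to confirm nothing is left behind.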



Great, glad to hear you were able to fix it. It's a great feeling when you do it yourself 😀

