
I tried looking for this, but does a VM have to be at a minimum hardware version for CDP replication? I'm turning up CDP for the first time for two VMs and getting the error below. Or am I going about this wrong?

 

6/27/2022 4:58:31 PM :: VM configuration for the initial sync completed with errors Error: The virtual machine version is not compatible with the version of the host 'esxi02'. (The version of the IO Filter(s) 'VEE_bootbank_veecdp_11.1.94-1OEM.700.1.0.15843807' configured on the VM's disk are not compatible with the one installed on the destination host.): first occurrence at 6/27/2022 3:52:02 PM, last occurrence at 6/27/2022 4:59:02 PM, 91 retries  

6/27/2022 5:01:17 PM :: Failed to configure source disks Error: The virtual machine version is not compatible with the version of the host 'esxi01'. (The version of the IO Filter(s) 'VEE_bootbank_veecdp_11.1.94-1OEM.700.1.0.15843807' configured on the VM's disk are not compatible with the one installed on the destination host.)  
 

 

Source Host:  ESXi 7.0.3 19482537

Destination Host:  ESXi 7.0.3 19898904

VM hardware versions: 8 & 11
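For reference, here's how the installed veecdp IO Filter VIB can be compared across the hosts before digging further; a rough PowerCLI sketch (the vCenter name is a placeholder, the host names are from my environment):

Connect-VIServer vcenter.domain.local
foreach ($name in 'esxi01','esxi02') {
    $esxcli = Get-EsxCli -VMHost (Get-VMHost $name) -V2
    # List just the Veeam CDP IO Filter VIB and its version on this host
    $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Name -like '*veecdp*' } |
        Select-Object @{N='Host';E={$name}}, Name, Version
}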

Final update: I have it working now. It turns out that the errors I was getting after resolving the above had to do with the replica VM not being attached to its disks. For instance, the VMX was pointing to SERVER1-interim.vmdk, but I don't believe those disks existed for some reason, or they had a different name (I don't recall exactly what happened). When I tried to edit the VM to remove the disks and then reconnect them, it wouldn't connect. In the end, I blew away the VM, deleted the disks and remaining files from the datastore, and then reseeded/replicated the VM from scratch, and it began working normally.
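If anyone needs to do the same cleanup, deleting the broken replica and its files from the datastore before reseeding can also be done from PowerCLI; the replica name here is just an example:

# Removes the replica VM and deletes all of its files from the datastore - destructive!
Remove-VM -VM 'SERVER1_replica' -DeletePermanently -Confirm:$false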

Now it's just a matter of fine-tuning the RPO policy and when I get alerts. I got a LOT of failed and success emails the first day or two because I'm attempting a 15-second RPO and it's having a hard time keeping up at times, so I may have to back it off a bit. Which is fine... with snapshot replication we had these at every 4 hours before implementing CDP, so a 1-minute, 5-minute, or even half-hour RPO would be acceptable for these particular VMs. I did end up changing my warning and failure alerting periods to 2 minutes and 5 minutes respectively so my inbox doesn't get quite so blown up.

So in summary, it appears the root cause of the issue was that the IO Filter versions didn't match between the two locations, even though the Veeam console showed they were up to date when managing the IO Filters. Putting the hosts in maintenance mode was critical here, but it's a bit misleading: the update is apparently queued until the host goes into maintenance mode, yet the console shows it as having either passed or failed, and it appears it can only be applied at the cluster level and cannot be managed per host. That would be my suggestion to clean up in future versions, but for a first crack at it, I'm impressed. I can't say whether the virtual hardware version was an issue or not, as upgrading it didn't resolve the error. Everything else appears to have been the after-effect of the failed replications after the initial seed, which were resolved by starting the syncs over from scratch.
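If it helps anyone else, this is roughly how a source host can be cycled so the pended IO Filter update actually applies; a PowerCLI sketch that assumes DRS can evacuate the host:

# Evacuate running VMs and enter maintenance mode so the queued IO Filter update can install
Set-VMHost -VMHost esxi01 -State Maintenance -Evacuate
# ...wait for the IO Filter update to finish applying, then exit maintenance mode
Set-VMHost -VMHost esxi01 -State Connected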


Hi, I can't find it documented, but since CDP requires ESXi hosts on version 6.5 and above, I'd say it's hardware version 13. If someone has a documented link, I'd love to see it!

That's along the lines of what I was thinking as well, but the VMs are in production now, so I can't shut them down to upgrade at the moment. I'll try it after hours and see what happens. It would be good to add to the documentation once we know that's the case.
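For reference, checking and bumping the hardware version is quick once the VMs can be shut down; a rough PowerCLI sketch (the VM name is an example, and newer PowerCLI releases use -HardwareVersion while older ones use -Version):

# Check the current hardware version and power state
Get-VM SERVER1 | Select-Object Name, HardwareVersion, PowerState

# The VM must be powered off to upgrade; vmx-13 corresponds to ESXi 6.5 and later
Shutdown-VMGuest -VM SERVER1 -Confirm:$false    # wait for the guest to power off
Set-VM -VM SERVER1 -HardwareVersion vmx-13 -Confirm:$false
Start-VM -VM SERVER1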




Source for the VM hardware version/ESXi compatibility matrix:
https://kb.vmware.com/s/article/2007240

Source for the minimum ESXi host version for CDP:
https://helpcenter.veeam.com/docs/backup/vsphere/platform_support.html?ver=110#virtual-infrastructure


Thanks for this interesting update @dloseke!


Just as a quick update, part of the issue appears not to have been tied to the VM hardware version; I tried performing incremental upgrades to v13 and up, but that made no difference. It actually appears that the new hosts I had just installed came with a newer version of the IO Filter than the old hosts had. I had tried to update the source hosts, and they indicated that they had updated, but it appears the IO Filter update was simply pended successfully because those hosts were not in maintenance mode. I had to evacuate the VMs from the source hosts, place the hosts into maintenance mode, and then the IO Filter on each seemed to update, even though Veeam said the drivers were up to date on both hosts in both clusters. See the screenshot below of the IO Filter version for my source and destination hosts/clusters.

I also performed some patching on these hosts so that my source and destination hosts matched, as the destination hosts were just installed and had the latest of everything. That said, I then got some errors about the storage policies on the datastores of the VMs, but I was able to reapply the policies and I believe that error went away. I'm still getting some errors (I don't recall which) and am working through them, but I had to call it a night. I'll post back when I have more to report.
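In case it's useful to anyone, reapplying the storage policy can also be scripted with the SPBM cmdlets in PowerCLI (VMware.VimAutomation.Storage module); a rough sketch where the policy and VM names are just examples:

$policy = Get-SpbmStoragePolicy -Name 'Datastore Default'   # example policy name
$vm = Get-VM 'SERVER1_replica'                               # example replica name

# Reapply the policy to the VM home object and to each virtual disk
$vm | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-HardDisk -VM $vm | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy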

[Screenshot: IO Filter versions on the source and destination hosts/clusters]

