New Cluster Node Requires Full Backup

Userlevel 1

We’re trying to do a rolling OS upgrade of two physical servers in a Windows Failover Cluster. The roles/resources are two Storage Spaces disks holding more than 20 TB of data each. When we add a new node to the cluster, Veeam insists on a full backup of each shared disk, which is not ideal. According to the documentation at the link below, this appears to be “by design”, but it is very painful, and our only option is to significantly expand the size of our backup target.


Is there any way to skip this full backup requirement? We do weekly synthetic fulls, so I added an additional one to the schedule, but the job still performed an active full, not a synthetic one.
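To illustrate the distinction between the two full-backup types (this is a minimal conceptual sketch, not Veeam's actual implementation; the function and variable names are mine): a synthetic full is assembled on the backup repository by merging the last full with the subsequent increments, so nothing extra is read from the source disk. An active full, by contrast, re-reads the entire production disk.

```python
def synthesize_full(last_full, increments):
    """Build a synthetic full on the repository side.

    last_full:  dict mapping block index -> block data (the previous full)
    increments: list of such dicts, ordered oldest to newest

    Later increments override earlier data, so the merged result is
    equivalent to a fresh full without touching production storage.
    """
    merged = dict(last_full)          # copy; don't mutate the stored full
    for inc in increments:
        merged.update(inc)            # newer blocks win
    return merged


# Tiny example: a 3-block disk with two nightly increments.
last_full = {0: b"aa", 1: b"bb", 2: b"cc"}
increments = [{1: b"B1"}, {0: b"A2", 1: b"B2"}]
synthetic = synthesize_full(last_full, increments)
```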


From a technical standpoint, I could understand needing to reset or ignore CBT and do a scan of the whole disk, but this is the only backup software I’ve used that arbitrarily requires a full backup when cluster node changes happen. Does this affect other cluster workloads, such as SQL or VMs? I had planned on moving those into our Veeam backup environment, but this would be a deal breaker.
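The CBT point above can be sketched in code (an illustrative toy model, not Veeam's mechanism; block size, class, and method names are all mine): changed block tracking marks blocks dirty as writes happen, so an incremental copies only those blocks. If tracking is invalidated, the worst case should be a full read of the disk to rediscover changes, rather than a new full backup on the target.

```python
BLOCK_SIZE = 4  # deliberately tiny blocks for the example


class TrackedDisk:
    """Toy disk with changed-block tracking."""

    def __init__(self, size):
        self.data = bytearray(size)
        self.dirty = set()   # block indices written since the last backup

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload
        first = offset // BLOCK_SIZE
        last = (offset + len(payload) - 1) // BLOCK_SIZE
        self.dirty.update(range(first, last + 1))

    def incremental_backup(self):
        """Copy only dirty blocks; cheap while tracking is intact."""
        changed = {b: bytes(self.data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE])
                   for b in sorted(self.dirty)}
        self.dirty.clear()
        return changed

    def full_scan(self):
        """Fallback when tracking is lost: read every block from source."""
        return {b: bytes(self.data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE])
                for b in range(len(self.data) // BLOCK_SIZE)}
```

Note that the full scan is expensive on the *source* side only; it does not by itself require writing a new full chain to the repository.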


Userlevel 7
Badge +14

Does the full backup happen on the existing hosts? It has been a while since I used S2D.

If not:

You could try to back up the OS disks separately from your S2D disks.

If possible, add the cluster resource as an extra agent and use that host to back up your data disks.

Userlevel 1

Yes, the full backup occurred before failing the roles over to the new node. I’ve since removed the new node and it’s back to doing regular incrementals.


This is regular Storage Spaces in a Windows Failover Cluster, not Storage Spaces Direct. Similar names, very different architecture.

We are backing up the cluster file servers, 62 TB in total, and now whenever cluster membership changes, the drives run a full backup. Is there any way we can avoid a full backup each time?

Userlevel 1

I opened a case with Veeam for this and received a generic “Open a feature request or discuss on the forum” answer.

Facing the same issue here. Yesterday it happened just because I removed an unneeded resource (a generic service) from each cluster role, with no dependencies there or anything. Veeam did a full backup of all disks afterwards. 40 TB lost in the repo :-(

Edit: This also happens each time we patch these clusters now. It might become a real problem. I must admit the old, crappy TSM agent-based backup did not have this limitation :-(

@BenJ I’m going to open a case as well for this, but your answer somewhat discouraged me.