
Good afternoon all.

I have been working on a project for a client over the past two months, which included moving the client over to Veeam from an old, outdated backup solution.

The client has an external RDX drive with 2 TB RDX cartridges, one for each day of the week.  Their backups started failing two weeks ago, with the report clearly indicating a lack of free space on the RDX cartridge.  When checking, old backups from Feb are still there and have not been purged.

Currently I have a repository set up on the F:\ drive (the external RDX drive), which the backup job saves to.

The job itself backs up the whole VM from Hyper-V, has a 3-restore-point retention policy, and is set to incremental with active full backups created on Mon, Tues, Wed, Thurs, and Fri.  All other advanced settings are left at their defaults.

Can someone help me understand why the older backups from Feb aren’t being purged per the retention policy?

Thank you!

Is the repository set up as backed by rotated drives?  Also, what are the job settings for the backups?


@Chris.Childerhose  To your question, yes, the repository is set with “backed by rotated drives” and the drop-down is set to “Continue existing backup chains (if present)”.

 

Backup job settings are as follows:

  • Backing up a Windows VM
  • Storage
  • Backup repository matches the one with the rotated-drives setting enabled.
    • Retention Policy:  3 Restore Points
    • Advanced area:  Incremental (recommended)
      • Active Full Backups created Mon-Fri
      • All other tabs/options set as default
    • Guest processing:  Application-aware processing (ON), guest file system indexing (OFF)
    • Run the job automatically at 4:15pm Mon-Fri
    • Retry 3 times
    • Wait before retries 10 min.
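For a sense of scale (a back-of-the-envelope sketch with purely hypothetical sizes, not figures from this thread): with an active full created every weekday and a 3-restore-point retention, every restore point is a full backup, so a single cartridge has to hold several fulls on top of any stale chains.

```python
# Hypothetical illustration: every restore point is an active full, so retention
# alone demands RETENTION_POINTS fulls' worth of space on the cartridge.

FULL_BACKUP_GB = 600      # assumed size of one active full (made-up number)
CARTRIDGE_GB = 2000       # 2 TB RDX cartridge
RETENTION_POINTS = 3      # the job's retention policy

needed_gb = FULL_BACKUP_GB * RETENTION_POINTS   # space retention requires
leftover_gb = CARTRIDGE_GB - needed_gb          # headroom for anything else

print(f"Retention needs {needed_gb} GB, leaving {leftover_gb} GB headroom")
```

With numbers like these, any unpurged chains left over from February eat the remaining headroom almost immediately.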

Check here, as this explains the delete process: https://helpcenter.veeam.com/docs/backup/vsphere/rotated_drives_hiw.html?ver=120
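As a rough mental model (my own simplified sketch of forward-incremental retention in general, not the rotated-drive specifics from that article): a full backup and its dependent increments are removed as a whole chain, and only once every point in that chain falls outside retention. With daily active fulls, each chain is a single point, so on a permanently attached repository the old fulls would normally age out quickly.

```python
# Simplified sketch (not Veeam's actual code): chains are dropped from the
# oldest end only while the remaining points still satisfy retention.

RETENTION_POINTS = 3

def prune_chains(chains):
    """chains: list of chains, each a list of restore points (oldest first).
    Returns the chains kept after applying restore-point retention."""
    total = sum(len(c) for c in chains)
    kept = list(chains)
    # Drop whole chains from the oldest end while the *remaining* points
    # still meet the retention target.
    while kept and total - len(kept[0]) >= RETENTION_POINTS:
        total -= len(kept.pop(0))
    return kept

# Daily active fulls mean every chain is a single point, so old fulls
# should be removed as soon as 3 newer points exist:
print(prune_chains([["full-mon"], ["full-tue"], ["full-wed"],
                    ["full-thu"], ["full-fri"]]))
```

If that pruning isn’t happening on the cartridges, the rotated-drives handling or out-of-scope VMs (below in the thread) is the place to look.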

 


Does your job state anything such as “VM XXX is no longer processed by this job”? If so, this is your problem. When a VM is no longer processed by a job, Veeam retains its backup data indefinitely (because you’ve asked it to keep retention points and it can’t violate this). You can override this behaviour by setting “Remove deleted items after XXX days” here: https://helpcenter.veeam.com/archive/backup/110/vsphere/backup_job_advanced_maintenance_vm.html

 

This protects against a scenario where a VM was accidentally removed from a job’s scope: Veeam will ensure you still have the latest backups of that server available until you say otherwise. This isn’t too bad when you’ve got per-VM backup files, but when using per-job backup chains, it gets noticeable quickly!
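To make that behaviour concrete, here is a minimal sketch (my own simplification, not Veeam's actual logic; the 30-day value is a hypothetical “Remove deleted items after” setting): in-scope VMs are pruned by restore-point count, while a VM that has left the job’s scope is only cleaned up by the age-based threshold.

```python
# Conceptual sketch: restore points for an out-of-scope VM survive until the
# hypothetical "Remove deleted items after" age limit, regardless of the
# restore-point retention that applies to in-scope VMs.
from datetime import date, timedelta

RETENTION_POINTS = 3
REMOVE_DELETED_AFTER_DAYS = 30   # hypothetical maintenance setting

def prune(points, in_scope, today):
    """points: list of (vm_name, backup_date); returns the points kept."""
    kept = []
    for vm in {p[0] for p in points}:
        vm_points = sorted((p for p in points if p[0] == vm), key=lambda p: p[1])
        if in_scope.get(vm, False):
            kept += vm_points[-RETENTION_POINTS:]             # normal retention
        else:
            cutoff = today - timedelta(days=REMOVE_DELETED_AFTER_DAYS)
            kept += [p for p in vm_points if p[1] >= cutoff]  # age-based cleanup
    return kept

today = date(2024, 4, 24)
weekly = [("SERVER", today - timedelta(days=d)) for d in range(0, 60, 7)]

print(len(prune(weekly, {"SERVER": True}, today)))   # in scope: 3 points kept
print(len(prune(weekly, {"SERVER": False}, today)))  # out of scope: 5 points kept
```

With the VM out of scope, every point newer than the 30-day cutoff survives, even though the job’s retention is only 3 restore points; without the setting at all, nothing is pruned.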



This is the error it gives when it fails, which makes sense, because looking at the drive in File Explorer shows it is nearly full:

 

Processing SERVER Error: There is not enough space on the disk. Asynchronous request operation has failed. [requestsize = 524288] [offset = 4096] Failed to open storage for read/write access. Storage: [F:\Backups\SERVER Backup\SERVER.05173cc8-bcc7-478a-be41-7e6d1023d0D2024-04-24T171750_B6E9.vbk].
Production drive D:\ is getting low on free space (122.2 GB left), and may run out of free disk space completely due to open snapshots.
Error: There is not enough space on the disk. Asynchronous request operation has failed. [requestsize = 524288] [offset = 4096] Failed to open storage for read/write access. Storage: [F:\Backups\SERVER Backup\SERVER.05173cc8-bcc7-478a-be41-7e6d1023d0D2024-04-24T171750_B6E9.vbk].
Processing finished with errors at 4/24/2024 5:28:38 PM

 

The cartridges show backups from mid-Jan, when the backups were initiated, so it’s definitely holding on to them; I’m just not sure why.

 


Have a look at the job statistics, please; just before the error in the list is where it will output any VMs no longer processed by this job.

