Solved

Create more disk space after backup to tape job has completed successfully


Userlevel 1

I am currently using Veeam Backup & Replication 10. How do I delete disk backup jobs after I verify the tape backup job has completed successfully? I’m in the process of ordering more hard drive disk space, but my fear is that I delete the wrong thing and prevent a restore from completing should I ever need to perform one. Thank you.

 

John N


Best answer by Rick Vanover 27 June 2022, 14:36


9 comments

Userlevel 7
Badge +4

The correct way to reduce disk-based backup consumption is to set the backup job's own retention to the duration you actually need.

Generally speaking, it is not a good idea to go into the disk tree of the user interface and start deleting backups. Even worse, many bad days have resulted from someone going to the file path and deleting files by hand.

Userlevel 7
Badge +5

Adjust the retention rules as @Rick Vanover suggested.
If you want to work with the files directly, you must know exactly what you are touching; a mistake can corrupt the backup chain, and 99% of the time it's not a good idea.

Userlevel 7
Badge +8

Yeah, I am with @Rick Vanover on this one: the safest and best way is retention points. Deleting files manually can cause corruption, as @marcofabbri noted, because the VBM file contains the index to the VBK/VIB files and would no longer be updated correctly.

Userlevel 1

Thank you, Rick. I have confirmed that the data retention period for the VMs-to-NAS job is set to 5 days. The VMs-to-NAS folder is 4.69 TB full out of 5 TB. It consists of 3 Veeam full backups and 11 incremental backups going all the way back to 4/7/2022. I realize I have a disk space problem that forces me to free up space. I have ordered three 8 TB drives, but until they arrive I need to create space so I can run another job to disk and then to tape. Thank you.

 

John  

Hi JohnN,

Having those fulls drives up the space consumption on that NAS. However, it isn't “bad practice” to have multiple fulls. A potential alternative is to increase your retention but make the chain forever incremental.

Keep in mind you'll still need some free space to restore that tape backup should you ever need it!

 

With the current retention policy you require Veeam to keep at least 5 restore points:

F i i i i

In practice it needs more than that in order to clean up the old full and its increments one day:

F i i i i i F i i i i

Here the new chain of backups (F + 4 increments) satisfies your minimum of 5, so Veeam will clean up the old chain (one week: F and 6 increments).

With a forever-incremental setting, Veeam merges the oldest increment into the existing full:

F i(1) i(2) i(3) i(4)  results in:  F(including i(1))  i(2) i(3) i(4) i(5)
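The chain cleanup described above can be sketched in a few lines of Python. This is a simplified illustration of the retention idea, not Veeam's actual implementation: a whole old chain (full plus its increments) is only deleted once the newest chain by itself satisfies the restore-point minimum.

```python
# Simplified model of forward-incremental retention (illustration only,
# NOT Veeam's actual code). Each chain is a full ("F") plus increments ("i").
# An old chain can only be dropped once the newest chain alone holds at
# least `min_points` restore points, so disk usage peaks before cleanup.

def prune_chains(chains, min_points):
    """chains: list of chains, oldest first; each chain is a list of points."""
    while len(chains) > 1 and len(chains[-1]) >= min_points:
        chains.pop(0)  # the whole old chain is deleted at once
    return chains

# Old chain F + 6 increments, new chain still growing:
print(prune_chains([["F"] + ["i"] * 6, ["F", "i", "i", "i"]], 5))
# New chain reaches 5 restore points, so the old chain is removed:
print(prune_chains([["F"] + ["i"] * 6, ["F", "i", "i", "i", "i"]], 5))
```

This is why the 5 TB share fills up before anything is deleted: both chains must coexist on disk until the newer one is long enough on its own.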

 

Why you have 3 fulls is unclear to me, given the oldest backup is from April (assuming your date format makes 4/7 April 7th, as it isn't July 4th yet). So I assume that oldest full is a leftover from something that wasn't cleaned up correctly.

Userlevel 1

Could someone please explain how reducing the retention period will free up space on the nearly full 5 TB share drive? No disk backups will run until I free up space. Thank you.

John    

John, reducing the number of restore points will reduce the consumption. However, for that to happen the job needs to be able to run.

In order to quickly free up space (as a drastic measure), you might try the delete-from-disk option:

https://helpcenter.veeam.com/docs/backup/vsphere/delete_backup_from_disk.html?ver=110

 

Userlevel 1

I’m trying to start the backup to disk after creating 3.65 TB of free space, and now when I start the backup job to disk I receive the error “Failed to execute Agent management command start Backup”.

Userlevel 7
Badge +8

I’m trying to start the backup to disk after creating 3.65 TB of free space, and now when I start the backup job to disk I receive the error “Failed to execute Agent management command start Backup”.

Might be time for a support ticket to get this resolved.
