The backup job is set to 2 restore points and creates an incremental (.vib) file every day. We are short on space and need to reduce the backup footprint. When I check the restore points, the job is keeping restore points for a month. Is there a way to reduce the number of days of backups that are kept?
I have only one .vbk file, which is 5 TB.
Please check this KB to see how you have your job configured and also how restore points work - KB1990: Backup Job has Too Many Restore Points - Considerations and Causes
This is the definition of the storage policy. What would be the best way to reduce the backup job?
Do you have the restore settings as Points or Days? We have started to use Days now as it seems to work better for retention cleanup, etc. You may want to change that and see how things go.
Sounds good. I have changed the retention from restore points to days. I will update after the job runs today.
Hopefully that addresses the issue. If not, you may need to remove restore points manually, but wait and see how it goes after a few days.
A couple of things here. What are you using for your repository? If it is a ReFS volume with 64K blocks, block cloning (Fast Clone) means synthetic fulls should not consume the entire space twice.

That said, as the job is configured, if you are trying to keep only 2 restore points (as I noted in your other post), you’re going to want to use forever forward incremental and enable health checks and maintenance on your restore points. Since you have synthetic fulls enabled (forward incremental), you’re going to keep more than 2 restore points. You’ll have up to 9 restore points at a time: a full with 6 incrementals behind it, then another full is created, but the previous chain still can’t be deleted because that would leave only 1 restore point, so another incremental is created first, and only then can the previous chain be deleted, leaving a single full and incremental again. Rinse and repeat.

I would also set retention per day rather than per restore point, as I’ve found that easier to manage and understand as well.
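To illustrate the retention math above (a rough sketch, assuming the 2-point retention and weekly synthetic full described in this thread):
Day 1: F1 (1 point)
Days 2-7: incrementals I1..I6 are added (up to 7 points)
Day 8: F1 + I1..I6 + F2 (8 points; the old chain can’t be removed yet, that would leave only 1 point)
Day 9: F1 + I1..I6 + F2 + I7 (9 points; now the F1 chain can be deleted, dropping back to F2 + I7)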
Delete them from within the console under the Disk section rather than manually. If you decide to delete them manually, be sure to rescan the repository afterward.
Can you share a screenshot of your retention? I see weekly synthetic fulls, so I wouldn’t expect it to keep that much data and only have one full backup file.
Do you have an immutability setting active?
Your screenshot shows a weekly synthetic full, so at most 13 restore points are created before the oldest ones are deleted. Immutability is the next topic that could play a role in this problem.
No, I do not have immutability configured for the backups. There were two backup jobs created for this storage policy. Somehow the prior job, XX-Backup job, was not used, and another job, X5-Backup job, was created and is in use now. Backups from XX-Backup job were deleted because of the space issue. What are my options here to consolidate the backups?
There is no consolidation option for backups. You will need to keep them separate or remove the one that is not needed.
I’m assuming that you’re referencing the below post.
With that said, it was believed that the old backup job no longer existed, and the assumption is that the old data was deleted from the “Disk (Imported)” classification. If the job still exists but the data has been deleted and you don’t intend to use the job, I’d remove it. Since it sounded like the data was deleted, I’m not sure what you mean by consolidating the backups. If the data from the old job no longer exists and you don’t want it anymore, so that you can start fresh, I’d remove the data via the Veeam console in the “Disk (Imported)” location. I’m assuming that the server is already being backed up via the new backup job, in which case those restore points would exist under the “Disk” classification. Is it under the current backup job that is in use that you’re seeing more than the expected restore points, or are you seeing them under the previous backup job?
In the current backup job there are restore points from Oct 5th to Nov 11th, so restore points are not getting deleted from Oct 5th onwards. There was another backup job folder that was deleted because of space; it shows under the Disk (Imported) backup job.
If the backups are not getting removed from Oct 5th onward, it could be broken chains causing this, and you might need to clean them up manually.
This is what I found in the GUI interface. Do we use Delete from Disk or Remove from Configuration? Otherwise, how can the backups be deleted? The backup files do not physically exist in the job folder from Jan.
Use Delete from Disk if you need disk space freed up. If the files are no longer on disk, use Remove from Configuration.
To use Fast Clone on XFS, the disk must be formatted with the XFS file system and have reflink enabled. Example format command (requires xfsprogs to be installed): mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb
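If you want to confirm that an existing XFS volume already has reflink enabled, a quick check is (the mount point below is just a placeholder for your repository path):
xfs_info /mnt/backup-repo | grep reflink
The output should include reflink=1.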
To use Fast Clone on ReFS, you need to format the volume with the ReFS file system and a 64K block size.
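As a rough sketch, on Windows a volume can be formatted that way with PowerShell (the drive letter is a placeholder, and this will wipe the volume):
Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536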
After you have decided which file system type you will use, you can create the repository and select the Fast Clone option at creation time.
A question for you: what Veeam version are you using?