Do you have space to try an active full? I know it will create a new chain but probably the easiest way if there is a chance of a corrupt file.
Additionally, does Veeam have access to the repository?
Yes, Veeam can access the repository without any issues. I also tested directly from the NAS (Synology) where the repository is located by creating a folder in the directory to check for write errors, and everything was fine.
I believe I have enough space, but will I retain the history in case I need to restore a VM to its state from 3 days ago?
@flipflip I’d say that’s the best approach as you will still be able to recover from the old chain.
Let us know how you get on.
Yes, provided your retention allows it. It depends on the number of restore points set in the backup job: if you only keep a couple of restore points and then take several new backups, the old chain will be removed. For example, if restore points are set to 2 and you take 3 backups, there is a chance you will lose the oldest one. Again, it depends on the settings you have defined.
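To make the retention point concrete, here is a very simplified sketch of the idea: each successful run adds a restore point, and anything beyond the configured count is dropped, oldest first. This is a hypothetical model only; Veeam's real retention works on whole chains of fulls and incrementals, so it is more nuanced than this.

```python
# Simplified, hypothetical model of restore-point retention: keep only the
# newest N points. Veeam's actual retention operates on backup chains
# (full + dependent incrementals), so points are not always removed one by one.

RESTORE_POINTS = 2  # value configured in the backup job

def run_backup(history, day):
    history.append(f"backup taken on day {day}")
    # Drop the oldest points once we exceed the configured retention.
    while len(history) > RESTORE_POINTS:
        removed = history.pop(0)
        print(f"retention removed: {removed}")
    return history

points = []
for day in (1, 2, 3):
    points = run_backup(points, day)

print("points still restorable:", points)
```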
Okay, I will try to initiate an active full. Is it possible to schedule it within the job, or is it mandatory to launch it manually?
In the advanced settings of the job you can enable Active full backup. Make sure the day is set to a day the job runs. Don't forget to disable the option once the job has run.
Backup Settings - User Guide for VMware vSphere (veeam.com)
Hello everyone,
This weekend I initiated a backup in Active Full mode, and once again, I encountered the same error.
I will conduct write tests on the hard drives used for the datastore.
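Something along these lines should do: write a large file to the share, flush it to disk, then read it back and compare checksums. This is only a rough sketch; /mnt/veeam-repo is a placeholder for the actual mount point of the repository share.

```python
import hashlib
import os

# Placeholder path: replace with the actual mount point of the repository share.
REPO_PATH = "/mnt/veeam-repo"
TEST_FILE = os.path.join(REPO_PATH, "write_test.bin")
CHUNK = os.urandom(1024 * 1024)  # 1 MiB of random data
CHUNKS = 512                     # ~512 MiB total

def checksum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1024 * 1024), b""):
            h.update(block)
    return h.hexdigest()

# Write the test file and force it out to disk.
expected = hashlib.sha256()
with open(TEST_FILE, "wb") as f:
    for _ in range(CHUNKS):
        f.write(CHUNK)
        expected.update(CHUNK)
    f.flush()
    os.fsync(f.fileno())

# Read it back and compare checksums to catch silent write errors.
actual = checksum(TEST_FILE)
print("write test", "OK" if actual == expected.hexdigest() else "MISMATCH")
os.remove(TEST_FILE)
```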
@flipflip do you have another device you can use as a temporary repo?
Would be interesting to see if you get the same error writing to a new repo. I suspect this is local to the repo as the Active Full creates a new chain.
It may also be worth raising this with support; even with Community Edition, support is provided on a best-endeavours basis.
Hello,
The test on a new datastore on another NAS went well. I'm letting several backups run to see if the issue recurs.
At the same time, I've just opened a support ticket: 07092821.
Thanks,
Philippe.
Thanks for the update. Out of interest, have you cold power-cycled the NAS that you're having issues with?
That may well be worth trying as well.
No, I didn't even think of that. I'll give it a try.
Hello everyone,
A quick update following the reboot of the NAS hosting the datastore: the job started, but the same error persists :(
So far, I haven't received any response from support.
The other job on the different datastore continues to work without any issues.
Hello everyone,
Unfortunately, the ticket has just been automatically closed as no one from support has responded :(
I'll have no choice but to rebuild the datastore and restart the backups of my VMs, hoping that the issue doesn't occur again.
@flipflip That happens if you have Community Edition. You mentioned that backups to another datastore worked, so it does look like the target configuration is the issue here.
I’d also suggest upgrading to V12.1.
I'm encountering the same issue, and I'm also on version 12.1.
In my case, the NFS share is used as temporary storage alongside my StoreOnce units, and the backup chain was newly created there.
As stated in older forum posts with similar issues, it could be either a DNS issue or simply services not responding in a timely manner...
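If you want to rule out the DNS angle quickly, timing name resolution of the repository host from the backup server can help. A minimal sketch; nas01.local is just a placeholder for whatever hostname the repository is registered under:

```python
import socket
import time

# Placeholder: replace with the hostname the repository is registered under in Veeam.
REPO_HOST = "nas01.local"

# Resolve the name a few times and report how long each lookup takes;
# consistently slow or failing lookups point at DNS rather than the repo itself.
for attempt in range(5):
    start = time.monotonic()
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(REPO_HOST, None)}
        elapsed = time.monotonic() - start
        print(f"attempt {attempt + 1}: {elapsed * 1000:.0f} ms -> {sorted(addrs)}")
    except socket.gaierror as err:
        elapsed = time.monotonic() - start
        print(f"attempt {attempt + 1}: failed after {elapsed * 1000:.0f} ms ({err})")
```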
Normally a ticket only gets closed automatically when the requester is no longer responding, but not the other way around … that’s not very professional