What are the best practices for agent-based backups of Linux systems with volumes larger than 2 TB, without failures? And for backing up an NFS volume?
Hi,
Veeam Agent for Linux supports volumes up to 218 TB, so you can proceed with backing up your >2 TB Linux machine.
Ensure that you have enabled CBT (changed block tracking).
In the backup job, enable more streams and higher parallelism. You may need to increase your proxy, VBR, and repository specifications to accommodate the additional streams.
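As a quick sanity check before sizing the job, you can confirm the source filesystem is within the agent's per-volume limit. A minimal sketch ("/" is a placeholder mount point; point it at your data volume):

```shell
#!/bin/sh
# Sketch: confirm the source filesystem is within Veeam Agent for Linux's
# 218 TB per-volume limit. "/" below is a placeholder; use your data mount.
LIMIT_TB=218
size_bytes=$(df -B1 --output=size / | tail -n 1)
size_tb=$((size_bytes / 1024 / 1024 / 1024 / 1024))
if [ "$size_tb" -lt "$LIMIT_TB" ]; then
    echo "OK: ~${size_tb} TB, within the ${LIMIT_TB} TB limit"
else
    echo "WARNING: ~${size_tb} TB exceeds the ${LIMIT_TB} TB limit"
fi
```

This only checks filesystem size; actual job throughput still depends on the stream and proxy settings above.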
If your overall backup workload is generally high, it would make sense to identify a lean period and kick off this machine's backup then.
If for some reason the backup takes too long, you can break it down by backing up specific mount points as volume-level backups spread across different backup jobs.
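One way to sketch that split, assuming hypothetical mount points and a repository name (the agent's `veeamconfig` CLI is used here; the commands are echoed as a dry run so you can review them before creating anything):

```shell
#!/bin/sh
# Dry-run sketch: one volume-level job per large mount point.
# Mount points, job names, and repository name are examples, not your config.
for mnt in /data1 /data2 /var/lib/pgsql; do
    name="job$(echo "$mnt" | tr '/' '-')"
    # Remove "echo" to actually create the jobs with the agent CLI.
    echo veeamconfig job create volumelevel \
        --name "$name" --repoName MainRepo --objects "$mnt" --maxPoints 14
done
```

Spreading the jobs across different schedule windows also keeps the concurrent load on the repository down.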
You can go through the agent user guide: https://helpcenter.veeam.com/docs/agentforlinux/userguide/architecture.html?ver=50
Is the Linux server a VM or physical?
As for an NFS share, you could use the native NAS backup functionality if you can't protect the NFS server directly, for example if it's a NetApp presenting the share.
To protect your NFS shares you can use the Veeam NAS file backup job. Set your NFS server up in the Inventory section as a filer, or if it's an NFS file server, set it up as a file server. You can also configure the filer so that Veeam integrates with its snapshots and backs up from those. Define a few "agent" proxies, then configure the NAS backup job to back up the files or file shares you wish to protect.
https://helpcenter.veeam.com/docs/backup/vsphere/file_share_support.html?ver=110
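Before registering the share, it's worth confirming the export is reachable from the proxy. A minimal sketch, with a placeholder hostname and export path, and the commands echoed as a dry run:

```shell
#!/bin/sh
# Dry-run sketch: pre-checks before adding an NFS share to the VBR inventory.
# Hostname and export path are placeholders for your environment.
NFS_HOST="nfs01.example.com"
NFS_EXPORT="/export/projects"
# List the server's exports (showmount ships with the NFS client tools):
echo "showmount -e ${NFS_HOST}"
# Test-mount read-only to confirm connectivity and permissions:
echo "mount -t nfs -o ro ${NFS_HOST}:${NFS_EXPORT} /mnt/nfs-check"
```

If the test mount works from the proxy, the NAS backup job should be able to enumerate and read the share.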