
Support for Proxmox has been one of the top requests from Veeam customers in recent months. Therefore the announcement in May to extend hypervisor support to Proxmox Virtual Environment (VE) has been long awaited! This will be the 7th virtualization platform supported by Veeam, in addition to the 3 cloud hyperscalers and the Veeam agents.

The Proxmox integration is currently planned for Q3 2024, and v1 will already include some nice features/benefits:

  • flexible storage options
  • immutable backups
  • granular recovery
  • cross-platform recovery (for example Proxmox <> VMware)
  • Veeam ‘standard’ performance with CBT, hot-add backup and BitLooker

In part 1 of this blog series I want to give a quick overview of the architecture of Veeam Backup for Proxmox and its initial setup.

 

Disclaimer: All information and screenshots in this blog post are based on an early access release. Changes can and will occur until the final release!

 

Architecture

The Proxmox integration itself will be enabled with an additional plug-in, which is installed on the Veeam Backup Server.

Besides the Veeam Backup Server and at least one backup repository, Veeam Backup for Proxmox utilizes workers. The workers transfer the VM data from the Proxmox VE host to the backup repository, similar to the workers in AHV backup or the backup proxies for VMware. They are Linux-based and can be deployed directly from the Veeam console. There should be at least one worker per host in order to utilize hot-add transport mode.

Plug-in setup

Nothing special to write about the setup of the plug-in; next, next, finish.

 

Adding a Proxmox VE host

After installing the plug-in, Proxmox will be available as an additional server in the Virtual Infrastructure tab.

 

Directly after adding a new Proxmox host, you’ll be asked whether you want to deploy a worker.

 

Afterwards the Proxmox host, including its VMs, should be visible in the inventory.

 

Overall the setup and configuration of Veeam Backup for Proxmox isn’t complicated and is very straightforward. In the next blog post I will focus on backup & restore of Proxmox VMs, and also on the migration of workloads from VMware to Proxmox.

@DecioMontagna Just to be sure, you’re running Proxmox 8.2 (or later) and installed the host with the official PVE image?

Yes… 



This host was migrated in-place from 6.x to 7.x a long time ago, then was migrated in-place to 8.x using the official migration documentation… 



Are you able to spin up a new 8.2 instance of Proxmox to try?



Maybe something is missing after the upgrades on this host, or there's some kind of incompatibility. Perhaps you can open a case and let our support check.


 


 

I have another 8.2 cluster here, I will try on this cluster… 

 


 


 

Sounds good. Let us know what happens and whether it works.


Hi all,

I am trying to test things out and it looks like Veeam is only able to back up running VMs. If the VM is stopped, I see:

“Failed to perform backup: Failed to connect the NBD server to the hypervisor host”

Do you have any idea what it could be?

Thanks in advance,

BR,

Leela


@Leela Backup of powered-off VMs is also possible, so in general you shouldn’t see any issues here. As we rescan the Proxmox host every 15 minutes, have you tried to back up the VM again after some time?


Hi Regnor,

Yes, I did try it. If Veeam does not find the VM on the expected host (moving the VM and not waiting the 15 minutes), the error is different. I tried to remove and re-add the Proxmox proxy, but it’s always the same error. 😞 When the VMs are powered on, everything is OK.


What kind of virtual disks does this VM have? QCOW2, RAW or VMDK?


The VMs are running on a Proxmox cluster. Some of the disks are on ZFS (and being replicated to other cluster nodes); I think there it has to be RAW. Some of them are on Ceph; I think they are QCOW2. The behaviour is the same for all of the VMs.


I don’t see any possible issues there. Maybe you can open a case and let support check your logs for those VMs.


I don’t have a paid license :(

 



You can still open a case with support. It is best-effort support, but hopefully you get an answer.


thanks..


@DecioMontagna - I was getting this socket error as well with the same REST API paths… and while I thought DNS wasn’t my issue either, it definitely was. Everything appeared resolvable between the PVE hosts and VBR; it was the worker that couldn’t talk to the hosts by NetBIOS name. My hosts have valid DNS records on the server I specified in the worker config, but I had added the PVE hosts to VBR via the NetBIOS name and not the FQDN. I removed the PVE hosts from VBR, re-added them with the FQDN, and redeployed the workers. As soon as I did that, the worker test worked without issue.

Not sure if you might have the same issue, but it could be something worth looking at.

 

So, for me, like @Rick Vanover said… it’s always DNS 😆
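For anyone hitting the same worker name-resolution issue, here is a quick sketch of the kind of check that helps, run from the worker or any Linux box using the same DNS server (the hostnames are examples, not real names from this thread):

```shell
# Return 0 if the name resolves via the system resolver (/etc/hosts + DNS).
check_resolves() {
  getent hosts "$1" > /dev/null
}

# The FQDN should resolve; a bare NetBIOS-style short name often does not
# on Linux, which is exactly what broke the worker in the post above.
if check_resolves pve01.example.local; then
  echo "FQDN resolves"
else
  echo "FQDN does not resolve - add the hosts to VBR by FQDN instead"
fi
```

If the short name resolves on Windows but `getent` fails on the worker, that points at NetBIOS/WINS resolution, which the Linux worker doesn’t use.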



What is the default username and password of the Linux worker, to test the DNS resolution?


I’m curious as to how the PVE backup works when the VMs are stored on LVM (block storage over FC). In the Veeam documentation it states:

“Veeam Backup for Proxmox VE creates a Proxmox VE copy-on-write snapshot of each VM added to a backup job. The snapshot is further used to create a VM backup.”

I know you can’t create snapshots on LVM in Proxmox, so how is Veeam getting the point in time for the backup?

 



When you deploy the worker VM, it asks you to specify a compatible snapshot storage to hold the snapshots if the source VM is stored on a storage that does not support them. If you do not have any compatible storage to specify, it will not work.
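To see whether a given cluster even has a snapshot-capable storage to point the worker at, you can inspect the storage configuration. A rough sketch (the list of snapshot-capable types is my assumption based on the Proxmox storage docs, not an official Veeam compatibility list):

```shell
# Classify the storage entries in a Proxmox storage.cfg by whether the
# backend generally supports snapshots: plain 'lvm' does not, while
# 'lvmthin', 'zfspool', 'rbd' and qcow2 on file storage ('dir'/'nfs') do.
classify_storage() {
  cfg="$1"
  snap_types=" lvmthin zfspool rbd dir nfs "
  # Storage stanzas start at column 0 as '<type>: <name>'.
  awk -F'[: ]' '/^[a-z]+:/ {print $1}' "$cfg" | while read -r type; do
    case "$snap_types" in
      *" $type "*) echo "$type: snapshot-capable" ;;
      *)           echo "$type: no snapshot support" ;;
    esac
  done
}

# Usage on a PVE node: classify_storage /etc/pve/storage.cfg
```

This only checks the storage type; whether a particular disk can actually be snapshotted also depends on its format and the Proxmox version, so verify against your own setup.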



Is nobody else having this problem? I did find out that when the VM is powered on, the backup starts, and I can then shut down the VM and all is still OK.

Could somebody please explain what the NBD server is in this case? The HV host is probably the Proxmox server, right? “Failed to connect the NBD server to the hypervisor host”



NBD is “Network Block Device” mode, used by the proxy backup to copy data on the fly between the PVE node and the Veeam server.

 

The PVE plug-in for Veeam still does not support DSA (“Direct Storage Access”) for backing up data.

 

