
Support for Proxmox has been one of the top requests to Veeam in recent months, so May's announcement extending hypervisor support to Proxmox Virtual Environment (VE) has been long awaited! This will be the 7th virtualization platform supported by Veeam, in addition to the 3 cloud hyperscalers and the Veeam agents.

The Proxmox integration is currently planned for Q3 2024, and v1 will already include some nice features/benefits:

  • flexible storage options
  • immutable backups
  • granular recovery
  • cross-platform recovery (for example Proxmox <> VMware)
  • Veeam ‘standard’ performance with CBT, hot-add backup and BitLooker

In part 1 of this blog series I want to give a quick overview of the architecture of Veeam Backup for Proxmox and its initial setup.

 

Disclaimer: All information and screenshots in this blog post are based on an early access release. Changes can and will occur before the final release!

 

Architecture

The Proxmox integration itself will be enabled with an additional plug-in, which is installed on the Veeam Backup Server.

Besides the Veeam Backup Server and at least one backup repository, Veeam Backup for Proxmox utilizes workers. The workers transfer the VM data from the Proxmox VE host to the backup repository, similar to the workers in AHV backup or the backup proxies for VMware. They are Linux-based and can be deployed directly from the Veeam console. There should be at least one worker per host in order to utilize hot-add transport mode.
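If you want to sanity-check the prerequisites before deploying a worker, a minimal Python sketch like the one below verifies that a PVE host resolves and that its API port is reachable. The hostname is a placeholder; 8006 is the default Proxmox VE API port.

import socket

PVE_HOST = "pve01.example.com"  # placeholder: your PVE host's FQDN
PVE_API_PORT = 8006             # default Proxmox VE API port

try:
    # Name resolution check: the backup server and the worker both need this to succeed
    addresses = {info[4][0] for info in socket.getaddrinfo(PVE_HOST, PVE_API_PORT)}
    print(f"{PVE_HOST} resolves to: {', '.join(sorted(addresses))}")
    # TCP reachability check against the PVE API port
    with socket.create_connection((PVE_HOST, PVE_API_PORT), timeout=5):
        print(f"TCP connection to {PVE_HOST}:{PVE_API_PORT} succeeded")
except socket.gaierror as exc:
    print(f"DNS resolution failed: {exc}")
except OSError as exc:
    print(f"Connection failed: {exc}")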

Plug-in setup

Nothing special to write about the setup of the plug-in; next, next, finish.

 

Adding a Proxmox VE host

After installing the plug-in, Proxmox will be available as an additional server in the Virtual Infrastructure tab.

 

Directly after adding a new Proxmox host, you’ll be asked whether you want to deploy a worker.

 

Afterwards, the Proxmox host, including its VMs, should be visible in the inventory.

 

Overall, the setup and configuration of Veeam Backup for Proxmox is straightforward. In the next blog post I will focus on backup & restore of Proxmox VMs, as well as on the migration of workloads from VMware to Proxmox.

@DecioMontagna - I was getting this socket error as well with the same REST API paths… and while I thought DNS wasn’t my issue either, it definitely was. Everything appeared resolvable between the PVE hosts and VBR; it was the worker that couldn’t talk to the hosts by NetBIOS name. My hosts have valid DNS records on the server I specified in the worker config, but I had the PVE hosts added to VBR via the NetBIOS name and not the FQDN. I removed the PVE hosts from VBR, re-added them with the FQDN, and redeployed the workers. As soon as I did that, the worker test worked without issue.

Not sure if you might have the same issue, but it could be something worth looking at.

 

So, for me, like @Rick Vanover said… it’s always DNS 😆
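For anyone hitting the same thing, the check boils down to whether a given name resolves at all from the machine in question. A minimal Python sketch (names are placeholders) comparing the short name against the FQDN:

import socket

SHORT_NAME = "pve01"            # placeholder short (NetBIOS-style) name
FQDN = "pve01.example.com"      # placeholder fully qualified name

for name in (SHORT_NAME, FQDN):
    try:
        # If only the FQDN resolves, re-add the host in VBR by FQDN
        print(f"{name} -> {socket.gethostbyname(name)}")
    except socket.gaierror as exc:
        print(f"{name} does not resolve: {exc}")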

What is the default username and password of the Linux worker? To test the DNS resolution?


thanks..


I don’t have a paid license :(

 

You can still open a case with support. It is best-effort support, but hopefully you get an answer.


I don’t see any possible issues there. Maybe you can open a case and let support check your logs for those VMs.


The VMs are running on a Proxmox cluster. Some of the disks are on ZFS (and are being replicated to other cluster nodes), where I think they have to be RAW; some of them are on Ceph, where I think they are QCOW2. The behaviour is the same for all of the VMs.


What kind of virtual disks does this VM have? QCOW2, RAW or VMDK?


Hi Regnor,

Yes, I did try it. If Veeam does not find the VM on the expected host (moving the VM and not waiting the 15 minutes), the error is different. I tried removing and re-adding the Proxmox proxy, but it’s always the same error. 😞 When the VMs are powered on, everything is OK.


@Leela Backup of powered-off VMs is also possible, so in general you shouldn't see any issues here. As we rescan the Proxmox host every 15 minutes, have you tried to back up the VM again after some time?


Hi all,

I am trying to test things out, and it looks like Veeam is only able to back up running VMs. If a VM is stopped, I see:

“Failed to perform backup: Failed to connect the NBD server to the hypervisor host”

Do you have any idea what it could be?

Thanks in advance,

BR,

Leela


 

Are you able to spin up a new 8.2 instance of Proxmox to try?

 

I have another 8.2 cluster here; I will try it on that cluster…

 

Sounds good. Let us know what happens and if this works.


 



@DecioMontagna Just to be sure, you’re running Proxmox 8.2 (or later) and installed the host with the official PVE image?

Yes… 

This host was migrated in-place from 6.x to 7.x a long time ago, then migrated in-place to 8.x using the official migration documentation…

Maybe something is missing after the upgrades on this host, or there's some kind of incompatibility. Perhaps you can open a case and let our support check.




Looking at the documentation again, I can see that the worker does not support routing, as described here:

 

https://helpcenter.veeam.com/docs/vbproxmoxve/userguide/multiple_networks.html?ver=1 

 

Section:

Example 4. Invalid Configuration

 

Although my worker only has one vNIC added, it may still be a problem or a bug. I will try to connect the worker to the same VLAN as the PVE node to see what happens.
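To illustrate what Example 4 in the documentation flags as invalid, here is a minimal Python sketch (addresses and subnet are placeholders) that checks whether the PVE node falls inside the worker’s subnet, i.e. whether the traffic would have to be routed:

import ipaddress

WORKER_NET = ipaddress.ip_network("192.168.10.0/24")  # placeholder worker subnet
PVE_NODE_IP = ipaddress.ip_address("192.168.20.10")   # placeholder PVE node address

# If the node is outside the worker's subnet, worker traffic would be routed,
# which matches the invalid configuration described in the documentation.
if PVE_NODE_IP in WORKER_NET:
    print("Worker and PVE node share a subnet: supported configuration")
else:
    print("PVE node is outside the worker subnet: traffic would be routed")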




I’m pretty sure your issue is a hostname that is not resolvable outside of Proxmox. pve01.domain.local is probably a default hostname, and you can’t ping pve01.domain.local from the Veeam server and the worker. If the Proxmox host is inside a NAT network and you have a local DNS server, add an entry for it. If it’s a publicly reachable Proxmox server, you can add an A record to any existing domain, like proxmox.domain.xyz, and set the hostname accordingly.

Hi, I don’t think so, because in the logs the worker can reach the VBR server by hostname, but not the PVE hostname. For me it’s related to the token ID used to access the PVE API. They aren’t behind NAT or anything, just routing between two VLANs, with no access restrictions on either side. I get the same from a browser if I try to access the API URL: a 401 error and nothing is displayed. But if I first authenticate to the PVE proxy URL on port 8006, then the API output is displayed in the browser at the same URL the worker is trying to access in the logs.

https://pve01.domain.local:8006/api2/json/nodes → System.Net.Sockets.SocketException (110): Connection timed out

 

Pretty sure it tries to reach pve01.domain.local and can’t; therefore it gets a connection timeout.

I really don’t think that is the problem; sniffing the PVE interface, I can see connections on port 8006 coming from the worker IP address. For me it’s related to the API access (authentication or token).
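One way to separate the two theories (network timeout vs. API authentication) is to call the same URL directly with an API token. A minimal Python sketch, assuming the requests library is available; the token value is a placeholder, and the Authorization header follows the documented Proxmox VE API token scheme:

import requests

PVE_URL = "https://pve01.domain.local:8006/api2/json/nodes"    # URL from the log above
TOKEN = "root@pam!veeam=00000000-0000-0000-0000-000000000000"  # placeholder token ID/secret

try:
    resp = requests.get(
        PVE_URL,
        headers={"Authorization": f"PVEAPIToken={TOKEN}"},
        timeout=10,
        verify=False,  # only for testing against PVE's default self-signed certificate
    )
    # 200 means network and token are both fine; 401 means the host was reached
    # but the token was rejected, so it would NOT be a routing/firewall problem.
    print(f"HTTP {resp.status_code}")
except requests.exceptions.ConnectionError as exc:
    print(f"Network-level failure (DNS/routing/firewall): {exc}")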


Are your workers “hard firewalled” from the VBR server? This would explain the issue, and I can get the documentation updated to reflect these requirements. Thanks @DecioMontagna 

 

Yes, they are in different VLANs, but there is no firewall between them, only routing. I can ping each from either side.

 

I can also sniff the interface, and I see traffic coming into the PVE node on port 8006, but a timeout is displayed on the worker test connection to the PVE node on the VBR server, and also in the “test_connection_service.log” log.





What is the default username and password for the workers? Maybe I can log in locally to run some tests.

 



