Veeam Backup for Proxmox – Architecture and Setup (Part 1)
Proxmox support has been one of the top requests for Veeam in recent months, so May's announcement that hypervisor support will be extended to Proxmox Virtual Environment (VE) has been long awaited! This will be the seventh virtualization platform supported by Veeam, in addition to the three cloud hyperscalers and the Veeam agents.
The Proxmox integration is currently planned for Q3 of 2024 and v1 will already include some nice features/benefits:
flexible storage options
immutable backups
granular recovery
cross-platform recovery (for example Proxmox <> VMware)
Veeam ‘standard’ performance with CBT, hot add backup and BitLooker
In part 1 of this blog series I want to give a quick overview of the architecture of Veeam Backup for Proxmox and its initial setup.
Disclaimer: All information and screenshots in this blog post are based on an early access release. Until the final release, changes can and will occur!
Architecture
The Proxmox integration itself will be enabled with an additional plug-in, which is installed on the Veeam Backup Server.
Besides the Veeam Backup Server and at least one Backup Repository, Veeam Backup for Proxmox will utilize workers. The workers transfer the VM data from the Proxmox VE host to the backup repository, similar to the workers in AHV backup or the backup proxies for VMware. They are Linux based and can be deployed directly from the Veeam console. There should be at least one worker per host in order to utilize hot add transport mode.
Plug-in setup
Nothing special to write about the setup of the plug-in; next, next, finish.
Adding a Proxmox VE host
After installing the plug-in, Proxmox will be available as an additional server in the Virtual Infrastructure tab.
Directly after adding a new Proxmox host, you’ll be asked whether you want to deploy a worker.
Afterwards, the Proxmox host, including its VMs, should be visible in the inventory.
Overall, the setup and configuration of Veeam Backup for Proxmox is very straightforward. In the next blog post I will focus on backup & restore of Proxmox VMs, as well as the migration of workloads from VMware to Proxmox.
@marouen labidi Sorry for answering in English, but for now replication isn’t available for Proxmox VE.
@Freddy86 If the preparation time for the worker takes too long, you could post your feedback in the R&D Forums. This way it might get changed in the next release: https://forums.veeam.com/kvm-rhv-olvm-proxmox-f62/
@regnor Thanks for the answer, but when will this feature be available?
Proxmox has pretty good HA and Replication capabilities by itself. Why do you need that feature in Veeam?
@Bitcircuit We need replication from a Proxmox cluster to another VMware cluster. Proxmox only does replication within its own cluster and requires ZFS or Ceph storage, and we don’t meet those requirements! That’s why we need it from Veeam. I’m only asking when this feature will be available in Veeam, thanks.
@marouen labidi → We have not announced any replication-type capability for Proxmox VE, but this and many other things have been requested. Nothing to share at this time, other than that interest in Proxmox is very strong, which helps prioritize new capabilities.
Replicating to a completely different hypervisor platform is the dumbest thing I have seen in a while. Why would anyone do that instead of running two independent Proxmox clusters and replicating between them?
Issues with the bootloader and missing drivers may be expected, if not other problems as well. You would have to convert the drive to VMDK and probably fix the bootloader when switching from Proxmox to ESXi.
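As an aside, and purely as an illustration of the kind of conversion mentioned above: a Proxmox qcow2 disk can be converted to VMDK with qemu-img before being attached to an ESXi VM. This is only a sketch, assuming qemu-img is installed; the paths are placeholders, and any bootloader or driver fixes would still have to be done inside the guest afterwards.

import subprocess

# Placeholder paths: point these at the actual Proxmox disk image and the desired VMDK target.
SRC = "/var/lib/vz/images/100/vm-100-disk-0.qcow2"
DST = "/tmp/vm-100-disk-0.vmdk"

# qemu-img converts between image formats; -p prints progress.
# streamOptimized is a common VMDK subformat for importing into ESXi/vCenter,
# but verify what the target actually expects before relying on it.
subprocess.run(
    ["qemu-img", "convert", "-p",
     "-f", "qcow2",                      # source format
     "-O", "vmdk",                       # target format
     "-o", "subformat=streamOptimized",
     SRC, DST],
    check=True,
)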
@Bitcircuit The native replication in Proxmox does not support the direct replication of virtual machines (VMs) between two different Proxmox clusters. Replication is designed to work only within the same Proxmox cluster, where nodes share a common configuration and centralized management. The most frustrating and ignorant thing is when someone talks without understanding the situation.
There are still options to replicate to another cluster, depending on the current cluster setup, but you stated you want to replicate from Proxmox to VMware, which makes absolutely no sense.
@Bitcircuit I believe there’s been a misunderstanding. My inquiry was specifically about global replication with Veeam, including scenarios involving Proxmox. The mention of replicating from Proxmox to VMware was not suggesting it as a direct feature but rather exploring potential workarounds or alternative solutions that Veeam might offer. If you’re focusing on Proxmox to VMware, that’s a different issue and not the core of my question. Let’s clarify: I’m interested in how Veeam can facilitate replication across different environments, not just between two specific systems. Your comment seems to miss this context entirely.
“replicating to a completely other hypervisor platform is the dumbest thing I have seen in a while..”
2 thoughts:
That’s harsh. We try to play nice here in the community.
Migration use case, sure. For “regular DR”, I agree it is not practical; the exception may be those who want to plan on a cloud target for DR (which is a different platform).
I do see a market for an in-operating-system type of sync, which can run into new problems with drivers and such. But above all, I think high-speed recovery with Veeam goes a long way. There are tricks with what we already have: agents may still be an option, or Instant Recovery to the Hyper-V role on the Veeam B&R server, etc.
My worker is not passing the test connection to the PVE node (cluster node).
@DecioMontagna → I had some worker tests fail when I didn’t have enough CPU, but that doesn’t seem like this type of error. Is your host nested on another hypervisor?
No, this host is not a nested VM. I think this is because there is no token ID configured for API access, but the Veeam documentation does not describe this as a requirement for the worker.
Are your workers “hard firewalled” from the VBR server? This would explain the issue, and I can get the documentation updated to reflect these requirements. Thanks @DecioMontagna
I’m pretty sure your issue is the hostname, which is not resolvable outside of Proxmox. pve01.domain.local is probably a default hostname, and you can’t ping pve01.domain.local from the Veeam server or the worker. If the Proxmox host is inside a NAT network and you have a local DNS server, add an entry for it. If it’s a publicly reachable Proxmox server, you can add an A record to any existing domain, like proxmox.domain.xyz, and set the hostname accordingly.
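To test this theory from the Veeam server or the worker, a quick check of name resolution and raw TCP reachability to port 8006 can help narrow things down. A minimal sketch; pve01.domain.local is a placeholder for whatever hostname the worker is actually configured to use.

import socket

PVE_HOST = "pve01.domain.local"   # placeholder: the hostname the worker uses for the PVE node
PVE_PORT = 8006                   # Proxmox VE web UI / API port

try:
    addr = socket.gethostbyname(PVE_HOST)
    print(f"{PVE_HOST} resolves to {addr}")
except socket.gaierror as exc:
    raise SystemExit(f"DNS resolution failed: {exc}")

try:
    # A plain TCP connect separates DNS/routing problems from anything
    # happening higher up the stack (TLS, authentication, the API itself).
    with socket.create_connection((addr, PVE_PORT), timeout=5):
        print(f"TCP connection to {addr}:{PVE_PORT} succeeded")
except OSError as exc:
    raise SystemExit(f"TCP connection to {addr}:{PVE_PORT} failed: {exc}")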
Not to be snarky but…
Yes, they are in different VLANs, but there is no firewall between them, only routing. They can ping each other from either side.
Hi, I don’t think so, because in the logs the worker can reach the VBR server using its hostname, but not the PVE hostname. For me it’s related to the token ID for accessing the PVE API. They aren’t behind NAT or anything, just routing between two VLANs, with no access restrictions on either side. I get the same timeout from a browser if I try to access the API URL: a 401 error and nothing is displayed. If I first authenticate against the PVE proxy URL on port 8006, then the API is displayed in the browser using the same URL that the worker is trying to access in the logs.
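To separate a network timeout from an authentication problem, one option is to call the PVE REST API directly with an API token created under Datacenter → Permissions → API Tokens. This is only a troubleshooting sketch with placeholder credentials; it does not imply that the Veeam worker itself authenticates with a token.

import requests  # third-party: pip install requests

PVE_HOST = "pve01.domain.local"   # placeholder node name
# Placeholder token in the form user@realm!tokenid=secret
TOKEN = "root@pam!veeam-test=00000000-0000-0000-0000-000000000000"

resp = requests.get(
    f"https://{PVE_HOST}:8006/api2/json/version",
    headers={"Authorization": f"PVEAPIToken={TOKEN}"},
    verify=False,    # PVE uses a self-signed certificate by default
    timeout=10,
)

# 200 with a JSON body means the network path and the token are fine,
# 401 points at authentication, and a timeout points at network/DNS.
print(resp.status_code, resp.text)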
What is the default username and password for the workers? Maybe I can log in to run some tests locally.
Pretty sure it tries to reach pve01.domain.local and can’t; therefore it gets a connection timeout.
I can also sniff the interface and I see traffic coming into the PVE node on port 8006, but a timeout is displayed in the worker test connection to the PVE node on the VBR server, and also in the logs (“test_connection_service.log”).
I really don’t think that is the problem; sniffing the PVE interface, I can see connections on port 8006 coming from the worker IP address. For me it’s related to the API access (authentication or token).
Looking at the documentation again, I can see that the worker does not support routing, as described here:
Although my worker only has one vNIC added, this may still be a problem or a bug. I will try to connect the worker to the same VLAN as the PVE node to see what happens.
@DecioMontagna Just to be sure, you’re running Proxmox 8.2 (or later) and installed the host with the official PVE image?