
Proxmox Virtual Environment: Architecture, Services, and User Tools


 

Proxmox Virtual Environment (Proxmox VE) can be considered an alternative to VMware vSphere, especially for organizations looking for cost-effectiveness, open-source flexibility, and a solid community-driven ecosystem solution.

As interest in and adoption of Proxmox VE increased, Veeam announced integration with Proxmox VE (planned for Q3 2024) at the latest VeeamON, expanding its integration options with the leading hypervisors on the market.

So, let's understand Proxmox's architecture, services, and user tools a little better.

Proxmox VE is based on a Debian Linux system, so the Linux kernel is our base here. On top of that, we can run virtual machines and containers.

 

Architecture

The hypervisor layer uses an architecture many may not yet be accustomed to. In a hybrid approach, Proxmox uses QEMU (Quick EMUlator) and KVM (Kernel-based Virtual Machine).

They are closely related but serve different purposes in the realm of virtualization.

KVM is a well-known type-1 hypervisor, and QEMU is a type-2 hypervisor.

QEMU is a generic and open source machine emulator and virtualizer.

From the perspective of the host system where QEMU is running, QEMU is a user program that has access to several local resources like partitions, files, and network cards, which are then passed to an emulated computer that sees them as real devices.

A guest operating system running in the emulated computer (guest VM) accesses these devices and runs as if it were running on real hardware. For instance, you can pass an ISO image as a parameter to QEMU, and the OS running in the emulated computer will see a real CD-ROM inserted into a CD drive.

QEMU also supports virtualization when executing under the KVM kernel module in Linux. In this case, KVM enhances virtualization performance and efficiency in Proxmox VE.

Proxmox VE utilizes QEMU for hardware emulation and KVM for hardware-assisted virtualization, providing good performance and flexibility.
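
Because QEMU's KVM acceleration depends on hardware virtualization support, a quick way to confirm it on a host is the following sketch (run directly on the Linux/Proxmox host):

```shell
# Check for Intel VT-x (vmx) or AMD-V (svm) CPU flags; a count greater
# than 0 means hardware-assisted virtualization is available.
grep -cE '(vmx|svm)' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded (kvm plus kvm_intel or kvm_amd).
lsmod | grep kvm
```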

 

Containers support

If you have heard of Docker containers, the first time you hear that Proxmox supports containers natively you might think of Docker as well. But that is not the case here: Proxmox supports Linux Containers (LXC) natively, not Docker containers.

The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.

LXC is a userspace interface for the Linux kernel containment features. Linux users can easily create and manage system or application containers through a powerful API and simple tools.

Current LXC uses the following kernel features to contain processes:

  • Kernel namespaces (ipc, uts, mount, pid, network and user)
  • Apparmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • CGroups (control groups)
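
To make the namespace idea concrete, the same kernel features LXC builds on can be demonstrated with util-linux's `unshare`. This is an illustrative sketch (it assumes a Linux host with unprivileged user namespaces enabled; `-r` maps the current user to root inside a new user namespace, so no real root is needed):

```shell
# Change the hostname inside a new UTS namespace; the change is invisible
# to the rest of the system, just as it is inside an LXC container.
unshare -r --uts sh -c 'hostname demo-container; hostname'

# Start a new PID namespace: the shell inside it sees itself as PID 1,
# isolated from the host's process tree.
unshare -r --pid --fork --mount-proc sh -c 'ps -e'
```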

Let's take a closer look at AppArmor and cgroups.

 

AppArmor (Application Armor) is a Linux security module and an alternative to SELinux (Security-Enhanced Linux). It enhances the security of Linux systems by confining individual programs (processes) to a limited set of resources and actions they can perform.

AppArmor is a mandatory access control (MAC) system for processes. It uses profiles to define access control policies for applications. Each profile defines a set of rules that specify allowed actions and resource accesses based on paths, network addresses, capabilities, and more.

Cgroups (Control Groups) are a core feature of the Linux kernel that aims to restrict, monitor, and isolate resource utilization like CPU, memory, and I/O bandwidth by processes, ensuring no individual process monopolizes excessive resources on the host system.

Cgroups are the foundational feature of container technologies, enabling efficient resource management and isolation between containers.
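
As a minimal sketch of what cgroups do under the hood (it assumes cgroup v2 mounted at /sys/fs/cgroup and root privileges; the group name "demo" is arbitrary):

```shell
# Create a cgroup and cap its memory at 256 MiB.
mkdir /sys/fs/cgroup/demo
echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/demo/memory.max

# Move the current shell (and its children) into the group; any process
# started from here on is subject to the limit.
echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```

Proxmox VE and LXC drive these same kernel interfaces for you when you set memory or CPU limits on a container.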

 

Services

A Proxmox VE node runs a couple of services for proper operation. Let's describe some of them.

 

"pveproxy" acts as a proxy server that receives REST API requests from clients, listening on port 8006. If required, this service forwards the request to other nodes (or pvedaemon). This server directly answers API calls which do not require root privileges.

"pvedaemon" is the REST API server defacto. All API calls that require root privileges are done using it. It usually serves requests from pveproxy, which listens to public ports, and runs as non-root users.

"pvestatd" is essential because it is the PVE Status Daemon. It queries the status of all resources (VMs, Containers, and Storage), and sends the result to all cluster members.

"pve-ha-lrm" is the Proxmox VE High Availability Local Resource Manager; every node has an active lrm if high availability is enabled.

"pve-cluster" is the heart of any Proxmox VE installation. It provides a database-driven file system for storing configuration files and replicating them in real time on all nodes. This service also provides a cluster-wide locking implementation, which we use to distribute statistical data to all cluster nodes.

For more information about Proxmox VE Services, refer to:

https://pve.proxmox.com/wiki/Service_daemons#pvestatd

 

User Tools

Let's review some essential command-line interface (CLI) tools in Proxmox VE.

 

"qm" is the tool to manage QEMU/KVM virtual machines on Proxmox VE. You can create and destroy virtual machines and control execution (start/stop/suspend/resume). Besides that, you can use qm to set parameters in the associated config file. It is also possible to create and delete virtual disks.

"pct" is the command-line tool to manage Proxmox VE containers. It enables you to create or destroy containers and control the container execution (start, stop, reboot, migrate, etc.). It can be used to set parameters in the config file of a container, such as the network configuration or memory limits.

"pvesm" (Proxmox VE Storage Manager) handles storage configurations and simplifies managing and maintaining storage resources in Proxmox VE.

"pvecm" (Proxmox VE Cluster Manager) manages and orchestrates cluster operations. It can be used to create a new cluster, join nodes to a cluster, leave the cluster, get status information, and do various other cluster-related tasks.

"pveum" (Proxmox VE User Manager) manages user accounts, permissions, and authentication within a Proxmox VE environment.

"pveceph" provides the necessary tools and utilities for integrating and managing Ceph storage clusters, enhancing the scalability, reliability, and performance of storage solutions for virtualized environments.

"ha-manager" (High Availability Manager) is a resource responsible for ensuring high availability (HA) of virtual machines (VMs) within a Proxmox VE cluster. It provides automated failover mechanisms, resource monitoring, and event-handling capabilities.

"pve-firewall" is the firewall management system provided within the Proxmox VE platform. You can set up firewall rules for all cluster hosts or define rules for virtual machines and containers. Features like firewall macros, security groups, IP sets, and aliases help make that task easier.

 

Graphical User Interface

You can use the web-based administration interface to manage Proxmox VE environments. It allows users to interact with Proxmox VE graphically, using menus and a visual representation of the cluster status.

 

 

The GUI features are:

  • Seamless integration and management of Proxmox VE clusters.
  • AJAX technologies for dynamic updates of resources.
  • Secure access to all Virtual Machines and Containers via SSL encryption (https).
  • Fast search-driven interface, capable of handling hundreds and probably thousands of VMs.
  • Secure HTML5 console or SPICE.
  • Role-based permission management for all objects (VMs, storages, nodes, etc.).
  • Support for multiple authentication sources (local, MS ADS, LDAP, …).
  • Two-factor authentication (OATH, Yubikey).
  • Based on ExtJS 7.x JavaScript framework.

For more information, access:

https://pve.proxmox.com/wiki/Graphical_User_Interface

 

Migration from VMware

Since release 8.2, the integrated import wizard for VMware ESXi VMs enables us to migrate VMs from VMware ESXi to Proxmox VE directly from the Proxmox web interface.

 

The step-by-step is available at this link:

https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Automatic_Import_of_Full_VM

Another option is to use OVF templates: export the virtual machine in the OVF format and then re-import it within Proxmox. In that case, we migrate not only the virtual hard disk but also the VM settings.
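
On the Proxmox VE side, the import itself can be done with qm (the VM ID, file name, and target storage are assumptions):

```shell
# Create a new VM from an exported OVF manifest, placing its disks
# on the local-lvm storage.
qm importovf 120 ./exported-vm.ovf local-lvm
qm start 120
```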

Another possibility would be to clone the operating system with solutions like Clonezilla. So, we can create an image of the operating system and restore this image within the Proxmox VM.

Another option in the future may be Veeam Backup & Replication. Since Veeam already allows portability of backups between different hypervisors, restoring a backup of a vSphere VM as a Proxmox VE VM will likely also be possible. Let's wait for news!

 

References

https://www.proxmox.com/en/proxmox-virtual-environment/overview

https://www.qemu.org/docs/master/about/index.html

https://linuxcontainers.org/

https://cloudzy.com/blog/qemu-vs-kvm/

https://apparmor.net/

https://docs.kernel.org/admin-guide/cgroup-v1/cgroups.html
