Proxmox Virtual Environment (Proxmox VE) can be considered an alternative to VMware vSphere, especially for organizations looking for cost-effectiveness, open-source flexibility, and a solid community-driven ecosystem.

As interest in and adoption of Proxmox VE increased, Veeam began supporting Proxmox within Veeam Backup & Replication, expanding integration options with the market's leading hypervisors.

So, let's better understand the architecture, services, and user tools that Proxmox provides.

Proxmox VE is based on the Debian Linux server distribution and runs a specially optimized Linux kernel. It allows you to run virtual machines and containers simultaneously.

Architecture

The hypervisor layer uses an architecture many may not yet be accustomed to. Proxmox uses QEMU (Quick EMUlator) and KVM (Kernel-based Virtual Machine) in a hybrid approach.

They are closely related but serve different purposes in the realm of virtualization.

KVM is a well-known type 1 hypervisor, technically a kernel module that turns Linux itself into a hypervisor. QEMU is a type 2 hypervisor: a generic, open-source machine emulator and virtualizer.

From the perspective of the host system where QEMU is running, it is a user program with access to several local resources like partitions, files, and networks. These resources are then passed to an emulated computer (Guest VM) that sees them as "real devices".

A guest operating system on the emulated computer (Guest VM) accesses these devices as if performing real hardware access. For example, you can pass an ISO image as a parameter to QEMU, and the operating system running on the emulated computer will understand that a physical CD-ROM has been inserted into a CD drive.

When QEMU runs on top of KVM, KVM handles CPU and memory virtualization using hardware acceleration, while QEMU handles device emulation. This combination results in near-native performance for virtual machines, as CPU operations are executed directly on the physical CPU, while QEMU focuses on emulating virtual devices like storage and network interfaces.
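
As a minimal sketch outside of Proxmox, this is roughly how QEMU is launched with KVM acceleration; the disk image and ISO file names below are hypothetical:

    # KVM executes guest CPU instructions natively on the host CPU,
    # while QEMU emulates the disk, CD-ROM, and network devices.
    # (disk.qcow2 and debian.iso are placeholder file names)
    qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
        -drive file=disk.qcow2,format=qcow2 \
        -cdrom debian.iso \
        -nic user,model=virtio-net-pci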

As Proxmox combines QEMU with KVM, it functions similarly to a Type 1 hypervisor even though QEMU technically runs in userspace.

In Proxmox, QEMU handles the emulation of devices like virtual disks and network cards and manages the virtual machine's I/O, while KVM provides hardware-assisted CPU and memory virtualization, enabling the guest operating system's instructions to execute directly on the physical CPU.

Containers support

If you're familiar with Docker containers, you might assume that Proxmox's native container support refers to Docker. However, that's not the case. Proxmox natively supports Linux Containers (LXC) but does not support Docker containers.

The goal of LXC is to create an isolated environment as close as possible to a standard Linux installation but without the need for a separate kernel. This means that LXC is a set of tools and APIs operating in user space rather than kernel space. The kernel space is where the operating system's core operates (handling low-level tasks like managing hardware, security, and system resources), while user space is where programs and applications run.

The Linux kernel itself has features like namespaces, cgroups, and seccomp that allow for the isolation and resource management of processes. LXC acts as an interface, making it easier for users to access and use these features to create isolated environments (containers). Linux users can easily create and manage system or application containers through a powerful API and simple tools.

Currently, LXC uses the following kernel features to contain processes:

  • Kernel namespaces (ipc, uts, mount, pid, network and user). Namespaces isolate various system resources such as processes, networks, and filesystems.
  • AppArmor provides mandatory access control to restrict container activities.
  • Seccomp policies reduce the attack surface by limiting available system calls.
  • Chroots (using pivot_root) isolate the container's filesystem.
  • Kernel capabilities grant specific permissions to processes.
  • cgroups (control groups) limit and account for the resource usage of processes within containers.
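
All of these building blocks can be observed on a running container from the host. The sketch below assumes a container process with PID 1234; the PID and outputs are illustrative:

    # List the namespaces the process lives in (pid, net, mnt, uts, ipc, user)
    lsns -p 1234

    # Check its seccomp mode and effective capabilities
    grep -E 'Seccomp|CapEff' /proc/1234/status

    # Show the AppArmor profile confining it
    cat /proc/1234/attr/current

    # Show the cgroup it is accounted under
    cat /proc/1234/cgroup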

Let's take a closer look at AppArmor and cgroups.

AppArmor (Application Armor) is a Linux Security Module (LSM) and an alternative to SELinux (Security-Enhanced Linux). It enhances the security of Linux systems by confining individual programs (processes) to a limited set of resources and actions they can perform.

AppArmor is a Mandatory Access Control (MAC) system designed to enforce process security policies. It uses security profiles to define application access rules, helping to make them more secure. These profiles specify what resources a program can interact with—such as which files it can read or write, which network ports it can use, and which system calls it can make.

When an application is launched, AppArmor checks its associated security profile. This profile outlines the resources the application can access and the actions it is permitted to perform. If the application attempts to access resources or perform operations outside the scope of its profile, AppArmor will block the action and log the event for auditing.
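
To make this concrete, here is a minimal sketch of what a profile can look like; the application path and rules are hypothetical:

    # /etc/apparmor.d/usr.bin.myapp (hypothetical application)
    #include <tunables/global>

    /usr/bin/myapp {
      #include <abstractions/base>

      # May read its own configuration and write only its own log
      /etc/myapp/** r,
      /var/log/myapp.log w,

      # May open IPv4/IPv6 TCP sockets
      network inet stream,
      network inet6 stream,

      # Everything under /home is explicitly denied
      deny /home/** rw,
    }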

Cgroups (Control Groups) are a core feature of the Linux kernel that restricts, monitors, and isolates process resource utilization, such as CPU, memory, and I/O bandwidth, ensuring no individual process monopolizes excessive resources on the host system.

In Proxmox VE (Virtual Environment), cgroups play a crucial role in resource allocation, isolation, and management for virtual machines (VMs) and containers (LXC). They ensure that each workload gets its fair share of system resources without one process monopolizing the hardware. Proxmox's KVM-based virtual machines also utilize cgroups to limit and isolate their resource usage.

With cgroups, Proxmox administrators can ensure that workloads run within defined resource limits, prioritize critical applications, and maintain system stability, especially in environments with multiple users or workloads.
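
On a Proxmox node, you can see the enforced limits directly in the cgroup v2 filesystem. This is a sketch; the container ID 101 is a placeholder, and the exact cgroup paths can vary between setups:

    # Kernel-enforced memory and CPU limits for container 101
    cat /sys/fs/cgroup/lxc/101/memory.max
    cat /sys/fs/cgroup/lxc/101/cpu.max

    # In practice, limits are set through the Proxmox tools, which
    # translate them into the cgroup settings above:
    pct set 101 --memory 2048 --cores 2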

Services

A Proxmox VE node runs a couple of services for proper operation. Let's describe some of them.

"pveproxy" is a gateway listening on port 8006 that receives REST API requests from clients. It forwards HTTP requests to internal backend services, such as pvedaemon. It also provides access to Proxmox VE's web-based management interface via HTTPS, authentication, and reverse proxy functionalities.

"pvedaemon" is the central daemon for managing virtualized resources such as virtual machines, Linux containers (LXC), storage, and networking. This service interacts with other Proxmox components to start, stop, migrate, or configure VMs and containers. It is the de facto REST API server that handles all API calls.

"pvestatd"  is responsible for monitoring and reporting system status. It queries the status of all resources (VMs, Containers, and Storage), and sends the result to all cluster members.

"pve-ha-lrm" is the High Availability (HA) local resource manager. It ensures the continuous availability of virtual machines and containers by monitoring and managing their status on local nodes. If a node or resource fails, pve-ha-lrm triggers recovery actions like migration or restarting VMs/containers to keep services running with minimal downtime.

"pve-cluster" is essential for managing and maintaining a Proxmox cluster, which consists of multiple Proxmox nodes working together to deliver high availability (HA), resource management, and fault tolerance. It enables seamless communication between nodes, allowing them to share configuration data and function as a unified, cohesive system.

For more information about Proxmox VE Services, refer to:

https://pve.proxmox.com/wiki/Service_daemons#pvestatd

User Tools

Let's review some essential command-line user tools in Proxmox VE.

"qm" is a tool to manage QEMU/KVM virtual machines on Proxmox VE. You can create and destroy virtual machines and control their execution (start/stop/suspend/resume). Additionally, you can use qm to set parameters in configuration files and create and delete virtual disks.

"pct" is a command-line tool for managing containers. It enables you to create or destroy containers and control their execution (start, stop, reboot, migrate, etc.). It can also be used to set parameters in a container's config file, such as the network configuration or memory limits.

"pvesm" (Proxmox VE Storage Manager) is a command-line tool that handles storage configurations and simplifies managing and maintaining storage resources in Proxmox VE. The poem tool supports a variety of storage backends, including but not limited to local storage (directories, ZFS, LVM), NFS, iSCSI, Ceph, CIFS, GlusterFS, and ZFS over iSCSI.

"pvecm" (Proxmox VE Cluster Manager) manages and orchestrates cluster operations. It can create a new cluster, add new nodes, obtain status information, managing quorum, and perform other related tasks.

"pveum" (Proxmox VE User Manager) manages user accounts, roles, and permissions. It enables administrators to handle user authentication and authorization while enforcing fine-grained access control over Proxmox resources like virtual machines (VMs), containers, and storage. This tool is crucial for configuring multi-user environments, particularly in large or shared deployments, ensuring appropriate access and security

"pveceph" provides tools for integrating and managing Ceph storage clusters. E. It enables administrators to deploy, configure, and monitor Ceph storage resources directly from the Proxmox interface. Proxmox VE uses Ceph as a backend storage solution for virtual machines (VMs), containers, and other data types.

"ha-manager" (High Availability Manager) is responsible for ensuring high availability (HA) of virtual machines (VMs) and containers within a cluster. It provides automated failover mechanisms, monitoring, and event-handling capabilities.

"pve-firewall" is the firewall management system. It allows you to configure firewall rules for all hosts in the cluster or define rules for virtual machines and containers. The Proxmox firewall operates using iptables or nftables, providing more granular control over network traffic. It supports host-based firewalls for each Proxmox node and VM or container-based firewalls for individual virtual machines or containers.

Graphical User Interface

The Proxmox VE (Virtual Environment) Graphical User Interface (GUI) is a web-based management platform that offers an intuitive and centralized way to oversee all aspects of a Proxmox virtualized environment. It equips administrators with robust tools to efficiently manage virtual machines, containers, storage, networks, and clusters from a single interface.

 

Some of the main features are:

  • Seamless integration and management of Proxmox VE clusters. The Proxmox VE GUI allows for centralized management of multi-node clusters, enabling administrators to add, monitor, and manage multiple nodes effortlessly.
  • AJAX technologies for dynamic resource updates. The interface dynamically updates resources and system status in real time without requiring a page refresh, improving the user experience and responsiveness.
  • Secure access to all Virtual Machines and Containers via SSL encryption (HTTPS), protecting the confidentiality and integrity of communication.
  • Fast search-driven interface, capable of handling hundreds and probably thousands of VMs. The interface has a robust search function that helps administrators quickly find and manage virtual machines, containers, and other resources, even in large-scale environments.
  • Secure HTML5 console or SPICE (Simple Protocol for Independent Computing Environments) for VM access. It enables users to interact with VMs directly through the web interface without additional plugins.
  • Role-based permission management for all objects (VMs, storages, nodes, etc.).
  • Multiple authentication methods, including local users, Microsoft Active Directory (MS ADS), LDAP, and other directory services, are supported, making it easier to integrate with existing authentication infrastructures.
  • Two-factor authentication (2FA). Proxmox VE supports two-factor authentication (2FA) through methods like OATH (Open Authentication) and Yubikey, adding an extra layer of security when accessing the GUI.
  • It is based on the ExtJS 7.x JavaScript framework, which provides a responsive, feature-rich interface with advanced UI components, ensuring a smooth and modern user experience.

For more information, access:

https://pve.proxmox.com/wiki/Graphical_User_Interface

Migration from VMware

Since release 8.2, the integrated Import Wizard for VMware ESXi VMs has made it possible to migrate VMs from VMware ESXi to Proxmox directly from the Proxmox web interface.

A step-by-step guide is available at this link:

https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Automatic_Import_of_Full_VM

Another migration option is to use OVF templates. It is possible to export the virtual machine in OVF format and then re-import it into Proxmox; in that case, we migrate not only the virtual hard disk but also the VM settings.
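
Proxmox exposes this on the command line through qm; the VM ID, OVF path, and target storage below are placeholders:

    # Create VM 200 from an exported OVF, placing its disks on local-lvm
    qm importovf 200 /mnt/export/myvm.ovf local-lvm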

Another possibility is to clone the operating system with solutions like Clonezilla. We can create an image of the operating system and restore this image within the Proxmox VM.

Another option in the future could be Veeam Backup & Replication. Since Veeam already allows portability of backups between different hypervisors, restoring a backup of a vSphere VM directly as a Proxmox VE VM will likely be possible. Let's wait!

 

