
Veeam Agent for Linux: Managed Deployment, Protection Groups, and Bare Metal Recovery

  • March 30, 2026
  • 5 comments
  • 47 views

eblack

Physical Linux servers are everywhere in environments that use Veeam. File servers, domain controllers, application servers, database hosts, monitoring appliances, NTP servers -- the list of workloads that run on bare metal Linux and never get virtualized keeps growing. And yet the Windows Agent gets most of the documentation, most of the community posts, and most of the institutional knowledge. This article covers the Linux Agent from deployment through protection group policy to bare metal recovery -- the parts that are actually different from the Windows Agent, the Linux-specific gotchas that bite people, and the recovery process that the documentation treats as obvious but really isn't.

Three Modes, Three Different Management Experiences

Veeam Agent for Linux runs in one of three modes depending on how you deploy it. The mode determines how the agent is configured, how backup jobs are managed, and what recovery options you get. Pick the right one before you touch a package manager.

 

 

| | Standalone | VBR-Managed | VSPC-Managed |
| --- | --- | --- | --- |
| Management | On the server itself -- CLI or ncurses UI via the veeam command | Policy configured in VBR console | Policy managed from VSPC |
| VBR required | No | Yes | Yes (via tenant VBR) |
| Centralized policy | No | Yes | Yes |
| VBR console visibility | No | Yes | Per-company job visibility |
| Protection groups | No | Yes | Yes |
| Backup copy integration | No | Yes | Yes |
| RBAC | No | Yes (VBR roles) | Yes (VSPC quota + billing) |
| Multi-tenant | No | No | Yes |

Standalone mode makes sense for a single server or small clusters in a network segment that can't reach VBR. Everything else should run VBR-managed or VSPC-managed. A standalone agent is invisible to your central backup infrastructure -- you find out it stopped working when you need to restore something, not before.

Kernel Module vs. Nosnap: The Decision That Determines Everything Downstream

This is the most important technical decision in a Linux Agent deployment. Veeam Agent for Linux ships as two distinct packages with fundamentally different snapshot and CBT mechanisms.

 

| veeam (kernel module) | veeam-nosnap |
| --- | --- |
| Full RAM-based CBT -- tracks changed blocks continuously | No kernel module dependency |
| Volume-level image backup for all supported filesystems | Works on cluster nodes where snapshot drivers conflict with cluster software |
| Fast incrementals after initial full | Works on locked-down or hardened OS builds |
| LVM snapshot support | LVM logical volumes and BTRFS subvolumes get native snapshots |
| Full bare metal recovery from Veeam Recovery Media | No RAM-based CBT -- incrementals are significantly slower |
| Requires kernel headers / DKMS, or pre-built packages from Veeam repo | Non-LVM/BTRFS volumes: file-level backup only in snapshot-less mode |
| CBT maps lost on reboot or module unload -- first incremental re-reads all blocks | Still requires free LVM extents for LVM snapshot creation |
| Conflicts with specific third-party snapshot drivers | Recovery media for BMR must come from the pre-built Veeam ISO |
| Secure Boot requires Veeam public key enrollment via mokutil | |

Use the kernel module package everywhere you can. The nosnap package exists for cluster nodes using shared storage, systems where kernel-devel and compiler packages can't be installed by security policy, and environments where the kernel module conflicts with existing software. Don't use nosnap just because it's simpler to deploy -- the cost is real: no CBT means every incremental re-scans for changes, which is slower and heavier on I/O.

blksnap vs. veeamsnap -- Which Module You're Getting

Veeam Agent for Linux v6 introduced blksnap alongside the existing veeamsnap module. Both serve the same purpose -- snapshot creation and CBT -- but they target different kernel generations. Here's the actual version map, because the numbers are different depending on whether you're using pre-built packages or DKMS:

veeamsnap -- the original module. Supports kernels from 2.6.32 up to 5.18. Pre-built binary packages are available for older RHEL/CentOS 6 and 7 kernels. The module is deprecated for kernels 6.8 and later and is not being developed further.

blksnap -- the current module. Pre-built binary packages require kernel 5.3.18 or later. For DKMS builds, blksnap works from kernel 5.10 onward. There's an overlap zone between 5.10 and 5.18 where both modules work -- on those kernels, blksnap is preferred since veeamsnap is no longer receiving updates. For anything below 5.10, use veeamsnap via DKMS if pre-built packages aren't available for your distro version. Check kb2804 on veeam.com for the per-distro module default before you install manually -- it's the authoritative reference and Veeam keeps it current.
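When scripting installs yourself, the version map above reduces to a comparison on the running kernel. A minimal sketch encoding only the DKMS thresholds (blksnap from 5.10 up, veeamsnap below); the pre-built package cutoffs differ as noted, and VBR-managed deployments make this choice for you:

```shell
#!/bin/sh
# Choose the snapshot module for a given kernel release string.
# Encodes the DKMS rule only: blksnap for 5.10+, veeamsnap below.
pick_module() {
    maj=${1%%.*}                 # e.g. "5" from "5.14.21-150400.24"
    rest=${1#*.}
    min=${rest%%.*}              # e.g. "14"
    if [ "$maj" -gt 5 ] || { [ "$maj" -eq 5 ] && [ "$min" -ge 10 ]; }; then
        echo blksnap
    else
        echo veeamsnap
    fi
}

echo "suggested module for this host: $(pick_module "$(uname -r)")"
```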

When you install from VBR centrally via a protection group, VBR picks the best available module automatically. When you install manually, the Veeam package prefers pre-built binary packages when they're available and falls back to DKMS.

Conflicting Snapshot Drivers

The Veeam kernel module won't load if any of the following are installed: hcpdriver, hcdriver, snapapi26, snapapi, snapper, dattobd, dattobd-dkms, dkms-dattobd, cdr, cxbf. Check for all of them before installing. dattobd is the most common conflict -- it lives in environments that ran Datto agents previously. Remove it cleanly before installing Veeam Agent. A partial install against a conflicting module produces errors that look like a broken Veeam package but are actually a kernel symbol conflict.
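Checking for loaded conflicting modules is easy to script into a rollout. A sketch that inspects lsmod output only -- package-level checks (rpm -q, dpkg-query -W) vary by distro and are left out:

```shell
#!/bin/sh
# Snapshot drivers known to conflict with the Veeam kernel module
# (list from this article -- re-check Veeam's KB before relying on it).
CONFLICTS="hcpdriver hcdriver snapapi26 snapapi snapper dattobd dattobd-dkms dkms-dattobd cdr cxbf"

# Print each conflicting module name present in the given lsmod output.
find_conflicts() {    # $1 = output of lsmod
    for m in $CONFLICTS; do
        printf '%s\n' "$1" | awk 'NR>1 {print $1}' | grep -qx "$m" && echo "$m"
    done
    return 0
}

hits=$(find_conflicts "$(lsmod 2>/dev/null || true)")
if [ -n "$hits" ]; then
    echo "Conflicting snapshot modules loaded:"
    echo "$hits"
else
    echo "No known conflicting snapshot modules loaded."
fi
```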

 

Secure Boot

Secure Boot requires every kernel module to be signed by a key the firmware trusts. Veeam's modules aren't signed by your distro vendor, so you need to enroll Veeam's public key via mokutil before the module will load. For pre-built binary packages, install the veeam-ueficert package and enroll the Veeam-provided key -- it covers all Veeam-compiled modules. For DKMS builds, DKMS generates its own signing key and mokutil enrollment uses that key instead. Either way, key enrollment requires a reboot and UEFI approval during the boot cycle, so plan for downtime. If Secure Boot is enabled, your only option for bare metal recovery media is the pre-built ISO from Veeam's website -- custom patched recovery media won't load.
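A quick state check before the maintenance window saves surprises. A sketch; the key path in the comment is a placeholder, since where veeam-ueficert or DKMS drops the public key varies by distro and build:

```shell
#!/bin/sh
# Report whether Secure Boot is active before attempting to load the module.
sb_state() {
    if command -v mokutil >/dev/null 2>&1; then
        mokutil --sb-state 2>/dev/null || echo "unknown"
    elif [ -d /sys/firmware/efi ]; then
        echo "UEFI boot, mokutil not installed"
    else
        echo "legacy BIOS boot -- Secure Boot not applicable"
    fi
}

echo "Secure Boot: $(sb_state)"

# Enrollment itself requires root, sets a one-time MOK password, and takes
# effect only after a reboot with on-screen approval in MOK Manager:
#   mokutil --import /path/to/veeam-public-key.der   # placeholder key path
#   reboot
```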

VBR-Managed Deployment: Protection Groups and Auto-Discovery

When you manage Linux agents through VBR, the protection group is your fundamental unit of organization. Every Linux server you manage through VBR belongs to exactly one protection group. The group defines discovery method, deployment credentials, and rescan schedule.

Protection Group Types

 

| Group Type | Discovery Method | Best For |
| --- | --- | --- |
| Individual computers | Manual list -- FQDNs or IPs entered directly | Small fixed server lists, isolated segments, or servers that don't fit an AD or CSV pattern. VBR connects via SSH to deploy the agent. |
| Active Directory OU | AD query -- computers in the specified OU | Domain-joined Linux servers managed via SSSD or Winbind. New servers are auto-added as they join the OU. Works for mixed Windows/Linux OUs -- VBR detects the OS and installs the correct agent. |
| CSV file | Comma-separated list of FQDNs/IPs | Non-domain environments where you maintain a server inventory externally (CMDB, Ansible inventory). VBR re-reads the CSV on each rescan -- additions to the file auto-deploy on the next scan cycle. |
| Pre-installed agents | Agent registers itself with VBR | Servers where VBR-initiated SSH isn't permitted. Install the agent manually or via config management, configure it to call home to VBR, and VBR places the server in this predefined group. |

Pre-Installed Agent Group Limits

The pre-installed agent group is a predefined group -- you can't apply a backup policy directly to it the way you can with other types. Servers that self-register appear there, and you cover them by creating a backup job targeting individual computers from that group. If you want centralized policy-driven coverage with auto-deployment, use one of the other group types.
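For that call-home configuration, the agent is pointed at VBR from the command line. A sketch assuming the veeamconfig mode setvbrsettings subcommand and flags from recent agent versions -- verify against veeamconfig --help on your build; the address, port, account, and password below are all placeholders:

```shell
#!/bin/sh
# Sketch: register a pre-installed agent with VBR so it lands in the
# pre-installed protection group. All connection values are placeholders.
register_with_vbr() {    # $1 = VBR server address
    if ! command -v veeamconfig >/dev/null 2>&1; then
        echo "veeamconfig not found -- run this on the agent host"
        return 1
    fi
    veeamconfig mode setvbrsettings \
        --address "$1" --port 10006 \
        --login veeamsvc --domain EXAMPLE --password 'REDACTED' \
    && veeamconfig mode info
}

register_with_vbr vbr01.example.com || true
```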

 

SSH Credentials and Privilege Escalation

VBR deploys the agent over SSH. The credentials need enough privilege to install packages and start services. For non-root SSH accounts, Veeam supports sudo escalation. The cleanest option at scale is a dedicated service account with NOPASSWD sudo for specific commands:

/etc/sudoers.d/veeam-agent

# Replace veeamsvc with your actual service account name
veeamsvc ALL=(ALL) NOPASSWD: /usr/bin/veeamconfig, \
  /usr/bin/apt, /usr/bin/apt-get, \
  /usr/bin/dnf, /usr/bin/yum, /usr/bin/rpm, \
  /usr/bin/zypper, \
  /usr/bin/systemctl, /usr/sbin/service
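Because a syntax error in /etc/sudoers.d can break sudo for every account on the box, validate the fragment with visudo before config management pushes it fleet-wide. A sketch:

```shell
#!/bin/sh
# Validate a sudoers fragment without installing it. `visudo -cf` parses the
# candidate file and exits non-zero on any syntax error.
f=$(mktemp)
cat > "$f" <<'EOF'
veeamsvc ALL=(ALL) NOPASSWD: /usr/bin/veeamconfig, \
  /usr/bin/systemctl, /usr/sbin/service
EOF

if command -v visudo >/dev/null 2>&1; then
    visudo -cf "$f" && echo "sudoers fragment OK"
else
    echo "visudo not available here -- run the check on a target host"
fi
rm -f "$f"
```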

 

Creating the Protection Group and Backup Policy

  1. In VBR, go to Inventory > Physical Infrastructure. Right-click and select Add Protection Group. Choose your group type and enter the server list or AD OU path. Supply SSH credentials. VBR validates connectivity before letting you proceed.
  2. On the Options page, set the rescan schedule. This controls how often VBR queries the group for new members and deploys agents to any it finds. Daily matches most provisioning cadences.
  3. Finish the wizard. VBR performs an initial scan, connects to each server via SSH, and installs the agent. Watch the Last Status column. Servers showing Warning almost always have an SSH connectivity or credential issue -- click the server and check the session log for the specific error.
  4. Create an Agent Backup Job under Jobs > Agent Backup > Add. Select Individual computers, pick your protection group as the source, configure scope (entire machine, volumes, or files), retention, and repository. VBR pushes the policy to agents on the next rescan cycle.
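Most of the step-3 Warnings are SSH or sudo problems you can catch before the first scan. A pre-flight sketch, assuming key-based SSH and the veeamsvc account from the sudoers example (one FQDN or IP per line in the hosts file; both names are placeholders):

```shell
#!/bin/sh
# Pre-flight for a protection group's server list: confirm key-based SSH and
# passwordless sudo for the deployment account before VBR's scan runs.
# "veeamsvc" matches the sudoers example; substitute your real account.
preflight() {    # $1 = file with one FQDN/IP per line
    while read -r h; do
        [ -n "$h" ] || continue
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "veeamsvc@$h" 'sudo -n true' 2>/dev/null; then
            echo "$h: OK"
        else
            echo "$h: SSH or passwordless sudo failed"
        fi
    done < "$1"
}

preflight hosts.txt 2>/dev/null || true
```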

 

LVM Snapshot Behavior and Space Planning

Most production Linux servers use LVM, and LVM snapshot behavior under Veeam Agent is worth understanding before your first backup runs.

When a backup job runs on a system using the kernel module package, Veeam creates an LVM snapshot of each logical volume being backed up. The snapshot requires free space in the volume group -- unallocated extents that haven't been assigned to any logical volume. From 10 to 20 percent of the volume's occupied space is the documented guideline for snapshot headroom -- this comes directly from Veeam's release notes. A 100 GB logical volume with 60 GB occupied needs at least 6 to 12 GB of free extents in its volume group. For high-write workloads like databases, plan for more.

Veeam doesn't warn you that snapshot creation will fail before the job starts -- it fails during the job. Pre-check your VG free space with vgs and lvs before deploying the agent on fully-allocated servers.
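The headroom arithmetic is worth scripting into provisioning checks. A sketch of the 10-to-20-percent guideline; the vgs/lvs lines in the comments are the real-host comparison and require LVM tools:

```shell
#!/bin/sh
# Snapshot headroom per the 10-20% of occupied space guideline.
# headroom_mb OCCUPIED_MB -> prints "min max" in MB.
headroom_mb() {
    echo "$(( $1 / 10 )) $(( $1 / 5 ))"
}

# The article's example: 60 GB occupied -> 6-12 GB of free extents needed.
set -- $(headroom_mb 61440)
echo "need ${1}-${2} MB of unallocated extents in the volume group"

# On the host itself, compare against actual free space (needs LVM tools):
#   vgs --units m -o vg_name,vg_size,vg_free
#   lvs --units m -o lv_name,vg_name,lv_size
```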

CBT Maps Don't Survive Reboots or Module Unloads

The kernel module's CBT maps live in RAM. Every time a server reboots, or the kernel module is unloaded for any reason, the CBT map for every volume is reset. The next incremental backup re-reads all data added to the backup scope to detect changed blocks -- it produces a correct incremental, but it takes longer and generates more I/O than a normal post-CBT incremental. This is expected behavior per Veeam's docs. Plan backup windows for servers that reboot regularly on a known cycle.

 

Application-Consistent Quiescing on Linux

Linux doesn't have VSS. Veeam Agent achieves application consistency through pre- and post-snapshot scripts defined per backup job. The pre-script runs before Veeam takes the LVM snapshot to quiesce the application. The post-script runs after the snapshot is taken to resume normal operations.

Pre-snapshot script -- PostgreSQL

#!/bin/bash
sudo -u postgres psql -c "CHECKPOINT;"
# pg_start_backup() was renamed pg_backup_start() in PostgreSQL 15 -- adjust
# for your version. Note the || true suppresses quiesce failures, so a failed
# quiesce will NOT abort the job; drop it if you want failures to fail loudly.
sudo -u postgres psql -c "SELECT pg_start_backup('veeam', true);" 2>/dev/null || true

Post-snapshot script -- PostgreSQL

#!/bin/bash
# pg_stop_backup() is pg_backup_stop() on PostgreSQL 15+
sudo -u postgres psql -c "SELECT pg_stop_backup();" 2>/dev/null || true

Pre-snapshot script -- MySQL / MariaDB

#!/bin/bash
# Caution: FLUSH TABLES WITH READ LOCK only holds while the client session
# stays open -- invoked one-shot like this, the lock is released as soon as
# mysql exits. Treat this as a flush before the snapshot, not a held lock.
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -e "FLUSH TABLES WITH READ LOCK; FLUSH LOGS;"

 

Configure scripts per backup job under Advanced Settings > Scripts. A non-zero exit from the pre-script aborts the backup by default. For database workloads, keep this default. A failed quiesce producing an aborted job is the right outcome -- a quietly crash-consistent database backup that succeeds is the dangerous one.

Bare Metal Recovery from Veeam Recovery Media

This is the section where most Linux Agent documentation gets vague. The recovery process works, but it has enough Linux-specific nuance that going through it for the first time during an actual incident is a bad idea. Test it before you need it.

Getting the Recovery ISO

Pre-built ISO from Veeam's website: Available per-distro and architecture from the Additional Downloads section of the Veeam Agent for Linux product page. This is the only valid option if Secure Boot is enabled. Test that the recovery kernel has drivers for your NIC and storage controllers before you need to use it in production.

Custom patched recovery media: Run veeam on the agent host, press M for Miscellaneous, and select Patch Recovery Media. This creates a recovery ISO using the running kernel of the protected server -- same drivers, same kernel modules. Better for physical servers with specialized hardware. Not available under Secure Boot.

Test Before You Need It

Boot the recovery media in your environment and verify it can see your network interfaces and storage controllers. If the recovery kernel doesn't have the driver for your NIC, you can't reach the backup repository. Finding this during a DR test is a 30-minute fix. Finding it during an actual recovery is a much worse morning.

 

Bare Metal Recovery Sequence

  1. Boot from Veeam Recovery Media. Attach the ISO via iDRAC, iLO, or IPMI for physical servers, or as a mounted ISO for VMs. When prompted, choose whether to start the SSH server -- enabling SSH lets you drive the recovery remotely, which is more practical than console-only for lengthy restores. Note the IP address shown on screen.
  2. Select Restore Volumes. The main menu offers Restore Volumes, Restore Files, and Exit to Shell. Choose Restore Volumes for bare metal recovery. Restore Files is available if you only need specific files and don't want to rebuild the full disk layout.
  3. Configure the backup repository connection. Provide the VBR server address, port, and credentials. The recovery environment connects to VBR to browse available restore points. If DHCP didn't assign a correct address for your restore network, configure IP settings via the shell before proceeding.
  4. Select the restore point and disk mapping. Choose the backup and restore point. The wizard shows the original disk layout and asks how to map volumes to the target disk(s). Identical hardware -- accept automatic mapping. Different disk size -- manually map each volume and resize partitions as needed. LVM layout is preserved automatically when restoring to a volume group that fits.
  5. Run the restore and handle the bootloader. Veeam restores volume data block-by-block from the backup. When complete, it attempts to repair GRUB2. On physical-to-virtual or hardware-changed restores, bootloader repair sometimes needs manual help -- if the restored server doesn't boot, drop to the shell from recovery media, mount the restored root partition, and run grub2-install or update-grub manually. This is the most common post-restore step on hardware-changed restores.
  6. Regenerate initramfs on hardware-changed restores. When restoring to different hardware or from physical to virtual, the initramfs may not contain drivers for the new storage or network controllers. Boot into the restored system and run dracut -f (RHEL-family) or update-initramfs -u (Debian/Ubuntu) to regenerate it with the correct drivers. Also verify lvm.conf filter settings if LVM volumes aren't being found -- a common issue when restoring to VMs where disk device names differ from the source.

 

Granular Recovery Options

Bare metal recovery is the nuclear option. For most day-to-day recovery needs, you want something faster.

File-level restore mounts the backup as a filesystem from VBR and exposes it via a network share or local mount point on the target server. From VBR console: Backups > Disk (Agent) > right-click restore point > Restore guest files > Linux. VBR mounts the backup read-only and presents a file browser. The running server isn't touched at all.

Volume-level restore writes a specific volume from the backup back to the running server -- useful when a volume is corrupted but the rest of the system is fine. The volume being restored is unmounted for the duration. You can't hot-restore an actively mounted system volume like /. For the root volume, bare metal recovery is the path.

What You've Completed

  • You understand the three deployment modes and when to use each. Standalone agents are invisible to central infrastructure -- anything you actively care about should be VBR-managed or VSPC-managed.
  • You've chosen between the kernel module package and nosnap. Use the kernel module everywhere you can. The two kernel modules are veeamsnap (up to kernel 5.18, being deprecated) and blksnap (5.10+ for DKMS, 5.3.18+ for pre-built binaries). Check kb2804 on veeam.com for the right module for your specific distro and kernel before installing manually.
  • Protection groups define discovery and deployment. Individual computers, AD OU, CSV, and pre-installed agent groups cover every topology. Pre-installed agent groups have limited policy capabilities -- use other types for centralized automated job assignment.
  • LVM snapshot creation requires 10 to 20 percent of occupied volume space as unallocated free extents in the volume group. Veeam doesn't warn you before the job -- it fails during it. Pre-check with vgs and lvs. CBT maps are RAM-based and reset on every reboot or module unload -- first post-reboot incremental re-reads all blocks.
  • Application consistency on Linux uses pre- and post-snapshot scripts. Non-zero exit from the pre-script aborts the backup by default -- for database workloads, keep this default. A failed quiesce should fail loudly, not produce a quiet crash-consistent backup.
  • Test your recovery media before you need it. Boot it, verify NIC and storage controller visibility, and run a practice restore. The two most common post-restore issues on hardware-changed systems: bootloader repair (grub2-install) and initramfs regeneration (dracut -f for RHEL-family, update-initramfs -u for Debian/Ubuntu). Both are straightforward once you know to expect them.

5 comments

Chris.Childerhose

Very nice write-up Eric. 👍🏼

 
 
 

eblack
  • Author
  • Influencer
  • March 30, 2026


Thanks. 


kciolek
  • Influencer
  • March 31, 2026

Nice write-up @eblack!


coolsport00
  • Veeam Legend
  • March 31, 2026

Nice VAL post Eric! And thanks for sharing the veeam vs nosnap differences. I’ve played with VAL a few yrs ago...and did a writeup myself on here...but never fully took the time to discern the differences between those 2. Well done!


eblack
  • Author
  • Influencer
  • March 31, 2026


Thanks!