
(World Backup Day submission) 3 AM, Ransomware on the Wire, and Why We Didn't Pay a Cent

  • March 23, 2026

eblack

 

Veeam Community Hub | World Backup Day 2026 Contest Entry

 

Working in the managed services space teaches you pretty quickly that good architecture isn't about optimizing IOPS or storage densities. It's about keeping a business alive when things go wrong. That mindset proved its worth late last year during one of the most stressful weekends my team has ever worked.

Our NOC woke me up at 3 AM on a Saturday. Medium-sized regional manufacturing client, three facilities running around the clock, roughly 200 VMs, about 80TB of production data. The client's IT Director was already on the bridge call. A ransomware strain had bypassed their endpoint protection and moved laterally. By the time the anomalies were flagged, the payload had encrypted their core ERP system, SQL databases, and primary file servers. All three production lines were stopped.

The threat actors left a ransom note demanding Bitcoin. They also claimed they had successfully deleted every backup repository.

The IT Director asked the question you never want to hear: are the backups actually gone, and do we need to hire a negotiator?

I told him the backups were safe. We were not going to pay anything. And I was certain of that because I knew exactly what we had built six months earlier.

 

Why the architecture held

 

When we rebuilt this client's DR stack, we migrated to Veeam Data Platform VBR v12 and made one non-negotiable design decision: get backup storage completely off the Windows domain. If your backup infrastructure relies on Active Directory authentication, a single compromised domain admin account can cost you everything. That is not a theoretical risk anymore. It is the playbook.

We replaced their legacy Windows backup servers with a Veeam Linux Hardened Repository built on bare-metal Ubuntu servers using XFS volumes. XFS fast clone (reflink) gives you space-efficient synthetic fulls without the I/O penalty of traditional synthetic full creation. But the real reason we built it this way was security. The repository is entirely off the domain. We used single-use credentials during initial configuration: they deploy the Veeam Data Mover once and are never stored in the backup infrastructure afterward. SSH was disabled post-setup. Physical management was restricted to an out-of-band segment (iDRAC/iLO only), completely isolated from the production network.
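For reference, preparing a volume for this kind of repository looks roughly like the following. The device name and mount point here are illustrative, not the client's actual build:

```shell
# Format the data volume with reflink enabled (crc=1 metadata is
# required for reflink; both are defaults on recent xfsprogs, but
# it costs nothing to be explicit)
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1

mkdir -p /mnt/veeam-repo
mount /dev/sdb1 /mnt/veeam-repo

# Confirm reflink is active before registering the volume with Veeam
xfs_info /mnt/veeam-repo | grep reflink
```

Fast clone only works if reflink is baked into the filesystem at format time, so verifying it up front saves an unpleasant surprise at the first synthetic full.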

Once data is written and the immutability flag is applied, nothing can modify or delete those files until the immutability window expires. Not a domain admin, not a compromised VBR server, not an automated wipe script.
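On disk, that protection is the standard Linux immutable attribute, which the Veeam transport service sets on each backup file. A quick manual sanity check looks something like this (the paths are hypothetical):

```shell
# List file attributes on the backup chain; the 'i' marks immutable
lsattr /mnt/veeam-repo/backups/ERP-Job/

# While the flag is set, delete attempts fail, even for root:
rm /mnt/veeam-repo/backups/ERP-Job/ERP01.vbk
# rm: cannot remove 'ERP01.vbk': Operation not permitted
```

Root could in principle clear the flag first, which is exactly why disabling SSH and keeping management out-of-band matters: there is no remote path to a root shell on the box in the first place.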

For the offsite copy, we configured a Scale-Out Backup Repository whose Capacity Tier offloads to S3-compatible object storage with Object Lock enforced on the bucket.
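If the target is AWS S3 or a fully S3-compatible platform, the bucket side of that setup is small but unforgiving: Object Lock can only be enabled at bucket creation. A sketch with the AWS CLI, with the bucket name and region as placeholders:

```shell
# Object Lock must be switched on when the bucket is created; it
# cannot be added to an existing bucket. Versioning is enabled
# automatically as part of this.
aws s3api create-bucket \
  --bucket veeam-offsite-copy \
  --region us-east-1 \
  --object-lock-enabled-for-bucket
```

When immutability is then enabled on the Capacity Tier extent, Veeam applies per-object retention itself, so a default bucket-level retention rule is generally not required.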

 

What the attackers actually ran into

 

Once incident response contained the network, we traced the attack path through the logs. The threat actors had successfully compromised a domain admin account and used it exactly the way you would expect. They ran scripts to locate and wipe backup repositories to maximize their leverage before the ransom demand.

The logs showed them attempting to connect to the Veeam server and delete the backup chains. It did not work. Because the hardened repository uses single-use credentials that are never stored in VBR, there was nothing for the attacker to steal or reuse. Even with full domain admin rights, they had no path to the credentials that would let them touch the repository. The storage refused the delete commands. Every backup chain was intact.

 

Getting production back online

 

With the backups confirmed clean, we started recovery carefully. Restoring infected VMs directly back onto the network is a fast way to reintroduce the same problem you just contained. So before bringing any systems online, we used Veeam Secure Restore. As each backup was mounted, we ran scans using both our configured antivirus engine and YARA rules targeted at the specific ransomware strain we were dealing with. Secure Restore temporarily mounts the backup disks and scans before anything touches the production network. Nothing came back infected. The payload had not gotten into the backups.
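Secure Restore runs that scan for you, but the same check can be approximated by hand against a read-only mount of the backup disks. The rule below is a placeholder to show the shape of it, not a real indicator set; real rules for a specific strain come from your IR team or threat intelligence feed:

```shell
# Placeholder YARA rule: the strings are illustrative, not actual IOCs
cat > suspected-strain.yar <<'EOF'
rule SuspectedStrain
{
    strings:
        $note = "ALL YOUR FILES HAVE BEEN ENCRYPTED" ascii wide
        $ext  = ".locked" ascii
    condition:
        any of them
}
EOF

# Recursively scan the mounted backup disks (mount point is hypothetical)
yara -r suspected-strain.yar /mnt/secure-restore
```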

With 80TB of data, we were not going to wait hours for a full hydration. Veeam Instant VM Recovery let us mount the Tier 1 workloads, the ERP and the SQL servers, directly from hardened storage to the VMware vSphere hosts. The ERP was back online Saturday afternoon. Production lines across all three facilities resumed the same day.

Over the rest of the weekend, VMware Storage vMotion migrated the data back to primary production storage in the background. Zero downtime, zero disruption to the floor. By Monday morning the client was fully operational.

What I took away from this

 

Zero data loss. Zero ransom paid. The client did not miss a production shift beyond Saturday morning.

My honest advice to anyone building backup infrastructure right now: immutability is not optional anymore. Get your backups off the Windows domain. Build Veeam Linux Hardened Repositories and understand what the single-use credential model actually protects you from. Test your restores before you need them. And design every environment assuming a breach is going to happen because at this point, the question is not whether, it is when.

Tagging Madi Cristil and @safiya for the World Backup Day contest. Happy to answer questions on the architecture in the comments.

 

#veeam  #WorldBackupDay  #VeeamDataPlatform  #CyberResilience

 
