This is a story that has surely happened, in a similar way, to many of you.
It is a story about a topic that is unfortunately becoming more and more common for those working as sys/backup admins: managing the response to a ransomware attack.

It was a Friday morning; the business week was drawing to a close, and outside a pleasant spring sun was already warming the earth.

As I arrived at the office, I was immediately notified that a client whose virtual infrastructure we were managing was no longer able to access some servers.
In most cases, reports like this turn out to be minor issues; sometimes a few small actions are all it takes and everything gets sorted out…

But this time, I don't know why, I had a bad feeling. We connect to the vCenter and it responds, fine, but as soon as we log in we see that most of the VMs are in an "inaccessible" state… what could have happened? The storage seems OK, the datastores are mounted… browsing around, we take a look inside a VM's directory, and what do we find? A ransom note, LOCKBIT 3.0… HELP!!!

We raise the alarm and start alerting all the relevant departments.

Most of the VMs were encrypted, and the ransomware was still running around trying to finish its work… we disconnect the storage from the hypervisors, trying to limit the damage.

The second thought obviously went straight to one thing: what about the backups?

We quickly connect to the NAS that was acting as a simple NFS repository; at first sight the Veeam files look good… OK, with a bit more peace of mind (but not too much) we decide to shut down the NAS to keep the damage from spreading to our last lifeline. We would deal with it a little later.

The battle to clean up the environment was hard: hours and hours of work, inversely proportional to the hours of sleep… the first day ended at dawn, and so did the next.

It had been a long time since I'd felt anything like it: anxiety mixed with adrenaline, the fear of not pulling it off, the excitement of discovering a new log, the satisfaction of fitting another piece of the puzzle into place.

I'll spare you all the details, partly because many things can only be told properly by the cybersecurity team that was involved… what I can tell you is that by the third day the customer had a new management network, a new vCenter, new hosts with ESXi installed from scratch… and of course a new VBR server and all the VMs restored from backup. Another piece of teamwork accomplished!

This time we were lucky, but it doesn't always go so well… the backup infrastructure was pretty standard, meaning basic configurations, when by now the standard should be one that respects the 3-2-1-1-0 golden rule (3 copies of the data, on 2 different media, 1 copy offsite, 1 copy offline or immutable, 0 errors after backup verification)… but you know, for one reason or another, we are unfortunately not always put in a position to do our job at our best.

Moral of the story: we should always try to work following best practices and address security from the design phase, carrying it forward as a mantra… but most importantly, never underestimate the importance of backups; they can always save us!

Every day is backup day! 💚

 

Great stuff, @marco_s ! :)

