
Hi There!

I've already shared this story in my VUG group in Spain,
but I wanted to translate it into English and share it with all of you.
It's a personal experience, the last one I had in production before changing roles,
and I'm now re-publishing it for Sysadmin month.

 

Hi,

I want to share with you my latest experience as the person responsible for IT infrastructure, just a week before moving to a new company, into a new role / position.

 

It was August 10th, 2022. I was enjoying my summer holidays before moving to the new company, celebrating my 37th birthday at the swimming pool area with my family, when, surprise, my phone dinged. I took a quick look, and this came up on the screen:


“We’ve got a Virus in our Systems”

 

I felt goosebumps all over my body. I left my pint of beer (yes, I love beer) on the table, turned to my wife, and told her, “I gotta go”… ransomware!

 

I changed my clothes, from my comfortable swimsuit to jeans and a t-shirt, and went to the office to get a better understanding of what was going on and execute the necessary actions to mitigate and resolve the issue.

In my head I had this crazy idea:

“It’s just a joke”, my colleagues are doing this to scare me, it’s a surprise because I’m leaving the company, a goodbye joke… I was wrong.

We confirmed that the virus was a ransomware called <DonkeyF*cker>; it hit our servers and spread like gunpowder through our VMs.

After a huge effort, coordinating all the IT resources at the office, we finally found the origin of the infection, patient zero of the encryption! We took it offline, formatted that PC with no regret, and then… time to recover…

“Surprise!!”

Our Veeam Backup Server got hit and the Backup repository was not accessible!

 

Calm down, calm down, we recovered all our servers!! Luckily, in our design, our two ESXi hosts replicated VMs to each other, as a “last resort” plan in case we lost production backups, replicas, and even repositories.

 

After a deep breath, we executed our recovery plan, restoring our environment from our replicas, and after a few tense minutes everything started to run as it was supposed to, with confidence growing as each VM was recovered.

Such a great feeling!!

In less than 60 minutes we were fully recovered, everyone was working as expected, and the managers / C-level were so pleased that we were able to protect and recover the business that quickly and smoothly.

 

I’m sharing this as a horror story with a positive ending, but when you are in the middle of the “situation”, it’s a nightmare.

 

Always have an A, B, C, and D plan. Test your backups, and test your DR plans!

 

Luis.

Well done Luis. Yes!... always have multiple layers in place where your data resides. Malicious actors may not get to all of them, leaving an avenue for recovery.

Cheers!


Great share! Can you share some more about the original infection? And how did it spread to the backup server?


Great story Luis.  Nice to see you were able to recover from this. 



 

I was wondering this as well. What lessons did you learn? What actions were taken to prevent this from happening again? Amazing that it was all back up in an hour! Congrats!



Sure!
The original infection came from a user with “admin rights” who installed an MS Office cracked with KMSpico… yes… “the Pirates of the Caribbean song”…

The DonkeyF*cker hit us from that workstation and started to work its way through the mapped drives, and of course everything was well mapped (I hate mapped drives).

The backup server, a VM with Windows Server 2016 + VBR, got hit because this user had access to it, and as I said, he was the “master of mapping”.

Lessons learned: get rid of any user with admin rights, no matter what,
and get rid of mapped drives, use shortcuts instead; they are easy to copy, easy to deploy, and lower risk, since worst case only the shortcut gets encrypted, with no direct access to the data / folder it points to (a small sketch follows below).
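
Not part of the original post, but as a rough illustration of that idea: a minimal Python sketch (assuming pywin32 is available; the share name \\fileserver\finance and the shortcut name are made up) that drops a desktop shortcut pointing straight at a UNC path instead of mapping a drive letter.

```python
# Hypothetical sketch, not from the original post: create a desktop .lnk that
# points at a UNC path, as a replacement for a mapped drive letter.
# Assumes pywin32 is installed; the share \\fileserver\finance is made up.
import os
import win32com.client


def create_unc_shortcut(link_name: str, unc_target: str) -> None:
    """Create a shortcut on the current user's desktop pointing at unc_target."""
    desktop = os.path.join(os.environ["USERPROFILE"], "Desktop")
    shell = win32com.client.Dispatch("WScript.Shell")
    shortcut = shell.CreateShortcut(os.path.join(desktop, f"{link_name}.lnk"))
    shortcut.TargetPath = unc_target  # e.g. \\fileserver\finance
    shortcut.Save()


if __name__ == "__main__":
    create_unc_shortcut("Finance", r"\\fileserver\finance")
```

The same idea works with whatever deployment tooling you already use; the point is only that the user clicks a .lnk that resolves to the share, rather than working through a persistent mapped drive letter.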

Invest a bit in immutable storage, so even if you get hit heavily you are totally fine, the data just cannot be “encrypted”.
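
For anyone wondering what “immutable” can look like in practice, one common option is object storage with Object Lock. Here is a rough Python sketch (using boto3; the bucket name, key, and file name are made up, and the bucket is assumed to have been created with Object Lock enabled) that uploads a backup file with a compliance-mode retention date, so it cannot be deleted or overwritten until that date passes:

```python
# Hypothetical sketch, not from the original post: write a backup file to an
# S3 bucket with Object Lock so it cannot be deleted or altered until the
# retention date passes. Assumes boto3 is installed, credentials are configured,
# and the bucket "my-immutable-backups" was created with Object Lock enabled.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

with open("backup-2022-08-10.vbk", "rb") as backup_file:
    s3.put_object(
        Bucket="my-immutable-backups",
        Key="backups/backup-2022-08-10.vbk",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",  # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```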

After all, the budget is driven by people who “don't care much” about IT; they just want things to work, no matter what, and that's a mistake.

Hope you enjoyed the little story.

cheers.


Thanks for the follow-up on this... very insightful. And good thought on the mapped drives. That’s super common, but it would be a relatively easy fix to help make that hop to the server a bit harder. Now that we’ve put it out there, I guess we can plan on malware beginning to look for shortcuts to UNC paths at any time, if it doesn’t already do so. 🤔

