Solved

Synology running Linux VM as hardened repo


Userlevel 2
Badge

Can’t find an exact answer to this scenario.

 

Is it possible to run a Linux VM (Ubuntu) on a Synology (via VMM), and expose (or give access to) the underlying storage to the VM so that we can use the immutability feature?


example:

1: 40TB of storage in one volume

2: an Ubuntu VM having access to that storage somehow (I’m unfamiliar with VMs on Synology)

3: somehow configuring the Veeam server to point to the Ubuntu VM’s storage so that it can use immutability


Best answer by MicoolPaul 19 May 2022, 00:57


22 comments

Userlevel 7
Badge +20

Yes, you should be able to do this. I run some VMs on Synology at home and it works similarly to VMware, etc.

Userlevel 7
Badge +6

I’m not sure I have enough info about your infrastructure, but I’ll take a stab at this. The way I plan on doing this is by having a virtual Linux server at our DR location. I’m attaching a Synology NAS to our ESXi hosts via iSCSI and then attaching the Linux box to the iSCSI device as an RDM disk. I believe the alternative is to set up iSCSI from the Linux server to the NAS directly. Once you have the device attached, you should be able to create an XFS volume to be used as the immutable repository. Then, in Veeam, add the Linux box as a repository server and configure the immutable repo. I haven’t actually performed these tasks yet, but it’s on my very short list to set up before I deploy to production, and it makes sense in my mind as it’s similar to the setup we use for virtual Windows repository servers, just with Linux. I’m sure others here will have additional input as well, but I hope this helps.

Edit: After a bit of Googling, I realize you’re wanting to run the VM on the NAS (didn’t realize VMM was a thing on Synology). Seems like you’d just be passing a local volume through to the VM to use as a repo (sketched below). Doesn’t seem like it’d be all that different from that perspective.
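To make that concrete, here’s a rough sketch of the Linux-side steps once the disk is visible to the Ubuntu VM, whether as an iSCSI LUN or a passed-through local volume. The target IQN, IPs, device name, and mount point below are placeholder assumptions, not Synology or Veeam specifics:

    # Install the iSCSI initiator (skip this and the login if the volume is passed through locally)
    sudo apt install open-iscsi
    sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.20
    sudo iscsiadm -m node -T iqn.2000-01.com.synology:backup-target -p 192.0.2.20 --login

    # Format the new device with XFS; reflink lets Veeam use fast clone for synthetic fulls
    sudo mkfs.xfs -m reflink=1 /dev/sdb

    # Mount it where Veeam will use it as a repository
    sudo mkdir -p /mnt/veeam-repo
    echo '/dev/sdb /mnt/veeam-repo xfs defaults 0 0' | sudo tee -a /etc/fstab
    sudo mount /mnt/veeam-repo

From there you’d add the VM in Veeam as a Linux repository server, point the repository at /mnt/veeam-repo, and enable the immutability option in the repository settings.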

Userlevel 7
Badge +17

This should be possible, but is it advisable for a production environment?

At least you have to secure the admin interface of the NAS as well as possible, to prevent the VM or the storage from being manipulated...

Userlevel 2
Badge +1

Running an immutable repo in a VM brings the risk that a compromise of the underlying infrastructure (the hypervisor) will affect the whole repo. This is not a scenario for production environments.

Userlevel 7
Badge +20

@JMeixner & @ravatheodor are bang on here that it’s not advisable, and that’s because of the extra risks of hypervisor issues and out-of-band management exploitation that would then result in console access to your server.

 

A hardened Linux repository just sets an attribute on your files marking them immutable, with a date when the immutability flag expires. Anyone with root-level access on the system can remove this. And it’s far easier to gain root access at console level.
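To illustrate what that flag looks like (a minimal sketch; Veeam manages the attribute through its own service, but the underlying mechanism is the standard Linux immutable attribute):

    # Mark a backup file immutable - even root can’t modify or delete it while the flag is set
    sudo chattr +i backup.vbk
    lsattr backup.vbk     # an ‘i’ in the output confirms the flag
    sudo rm backup.vbk    # fails with “Operation not permitted”

    # ...but anyone with root can simply clear it again, which is exactly the risk being described
    sudo chattr -i backup.vbk
    sudo rm backup.vbk    # now nothing stops deletion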

 

You’d need to take a lot of steps to realistically secure this to make it worthwhile. It’s certainly easier with physical servers.

 

Have you purchased this already? I’ve found Dell servers sized for repositories cost comparably to a Synology plus your own disks, and the benefit of advance replacement and field-engineer warranties makes Dell/HPE solutions much better value (not just those vendors, but you get the idea!).

Userlevel 7
Badge +6

At least you have to secure the admin interface of the NAS as well as possible, to prevent the VM or the storage from being manipulated...

This is going to be advisable regardless of whether the repo server is on the NAS or elsewhere. If you don’t protect access to the underlying storage, there’s nearly no point to immutability. And the number of vulnerabilities that QNAP and Synology have to patch because they are highly targeted makes them one of my lesser suggestions for backup repositories. That, and software RAID with its own issues that can introduce corruption, as well as lack of redundancy. But if it’s just being used for lab/testing, or at least the risks are well known and advised against, then I guess the end user is accepting the risk.

Userlevel 7
Badge +6

@JMeixner & @ravatheodor are bang on here that it’s not advisable, and that’s because of the extra risks of hypervisor issues and out-of-band management exploitation that would then result in console access to your server.

 

A hardened Linux repository just sets an attribute on your files marking them immutable, with a date when the immutability flag expires. Anyone with root-level access on the system can remove this. And it’s far easier to gain root access at console level.

 

You’d need to take a lot of steps to realistically secure this to make it worthwhile. It’s certainly easier with physical servers.

 

 

This is a REALLY good point. I’m about to deploy some virtual Linux repos on a separate host, but you raise a good point about the accessibility of the console should vCenter/host access be compromised. Going to have to give that one some further thought. Still better than no immutability at all, though.

 

Have you purchased this already? I’ve found Dell servers sized for repositories cost comparably to a Synology plus your own disks, and the benefit of advance replacement and field-engineer warranties makes Dell/HPE solutions much better value (not just those vendors, but you get the idea!).

 

This. This is my number one solution. I’ve been doing purpose-built Dell servers for a while, be it an R440, R540, T640, or even a T340 in super-small scenarios. If you need even more space, using a lower-end server and a PowerVault array for gobs of storage works great as well, but the T640 can hold up to 18 disks, and those disks are up to 14TB or 16TB (or more even?) each if you need… you do the math. It makes a TON of sense to use these. Plus, you’re then getting battery-backed cache on the RAID controller, hardware RAID, enterprise hardware support, and a set lifecycle (7 years max, for instance, with Dell warranties) instead of buying a device and leaving it to sit for 10 or 12 years, occasionally having to order a new drive that takes a week or two to arrive when one fails, with no set end of life.
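To put numbers on “you do the math”: 18 bays × 16TB is 288TB raw before RAID overhead, so raw capacity is rarely the constraint.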

Userlevel 2
Badge

I was piecing together a Dell PowerEdge, but once you start adding disks (at least through the Dell configuration tool) the price jumps. We’re working with around $20k max, to my understanding. If we didn’t need immutable backups, the Synology would definitely work best for us. But by the sounds of it, we should be using a physical Linux box (perhaps still iSCSI-mapped to the Synology, with restrictions), but a physical box all the same. Might have to go back to the drawing board on this one.

Userlevel 7
Badge +6

I was piecing together a Dell PowerEdge, but once you start adding disks (at least through the Dell configuration tool) the price jumps. We’re working with around $20k max, to my understanding. If we didn’t need immutable backups, the Synology would definitely work best for us. But by the sounds of it, we should be using a physical Linux box (perhaps still iSCSI-mapped to the Synology, with restrictions), but a physical box all the same. Might have to go back to the drawing board on this one.

 

How many and what size of disks are you needing out of curiosity?

Userlevel 2
Badge

We specced out a Synology through a partner: 12x 3.84TB SSD in RAID-6 = roughly 38TB usable.
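For reference, that checks out: RAID-6 consumes two disks’ worth of capacity for parity, so usable space is (12 − 2) × 3.84TB = 38.4TB, a little less once formatted.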

Userlevel 7
Badge +20

SSD is gonna be the thing bumping up the price there; depending on your performance demands, you might get away with read-intensive drives to drag costs down a bit on the Dell side…

 

What grade SSDs are they? What’s the warranty and performance?

 

You’re more likely to have RAID issues with Synology/QNAP-type devices, from my experience, especially if you did consider the iSCSI route. Again, the problem with iSCSI is that someone could compromise the Synology and remotely wipe the LUN, and then the immutability meant nothing. Direct-attached storage is great in this regard.

 

If you absolutely need SSD, I hope you’re getting a 10Gbps or faster NIC in the Synology and the CPU/RAM to deliver the IO. I’m fully aware I sound like I’m just bashing Synology, and I don’t want to sound like that. They absolutely have a place in the market, but high-end SSD performance isn’t their primary application in my experience. They typically use weak, low-frequency, low-core-count processors. I take it you’re looking at an FS-series Synology? Those seem best suited to the task out of their offerings. What disks? Something like the Samsung PM1643s?

 

Curve ball: you may find a vendor offering a warranty-backed refurb of either the current or previous generation; those SSDs are normally far cheaper but much more durable. Then I’d add an extra disk’s worth of parity to be safe.

 

If you go down the Synology route, I suggest giving yourself extra headroom for failure tolerance, completely isolating the device, seeing if it can be presented to a server via SAS/Thunderbolt for example, and using it with a decently specced server. Then restrict all access and only allow outbound syslog or email for alert monitoring, as an example.
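As a rough sketch of that lockdown on an Ubuntu repo VM (ufw syntax; the backup server and syslog IPs are placeholders, and the exact Veeam port list depends on your version, so check Veeam’s used-ports documentation):

    # Drop all new connections in both directions by default
    sudo ufw default deny incoming
    sudo ufw default deny outgoing

    # Allow only the Veeam backup server to reach the data mover and transport ports
    sudo ufw allow from 192.0.2.10 to any port 6162 proto tcp
    sudo ufw allow from 192.0.2.10 to any port 2500:3300 proto tcp

    # Allow outbound syslog to the monitoring host, nothing else
    sudo ufw allow out to 192.0.2.30 port 514 proto udp

    # SSH (22) is only needed from a management host during initial deployment
    sudo ufw enable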

Userlevel 2
Badge

Now that I’m going through the Dell website on something other than my phone, the pricing isn’t too far off if we did 6x 8TB SAS HDDs (32TB rather than 38TB, but still within our requirements)… Plus Dell’s ProSupport Plus is amazing (from my previous job experience), whereas Synology doesn’t offer that level of support to my knowledge. We will definitely be going back to the drawing board, I think, to look around.

Userlevel 7
Badge +20

Yep, the 4-hour hard drive SLA is the key selling point for me; that’s not exclusive to Dell, of course. But I had a client on the outskirts of London that HPE could never get disks to in 4 hours, and I thought: if you can’t get disks to a customer in 4 hours there, how are you gonna deliver to a rural location?

 

Please bounce ideas off us here in the community; I love this kind of stuff, and it’s not a small amount of money to write off if it goes wrong!

 

 

Userlevel 7
Badge +6

Unless you’re talking about some very aggressive backup window requirements, I’ve found SSDs to be pretty unnecessary for backup repos. I used to use them for SSD caching on Synology NASes, but decided that 10Gb networking was more necessary than the SSDs. As Michael said, they have a place, but unless you’re doing some fast synthetic operations or expecting a high change rate on your VMs, I’ve found them not really necessary. And enterprise SSDs, which would be recommended over the likes of a Samsung Pro (which is good, but not perfect), cost a lot more.

I just put together an R540 with 8x 8TB 7.2K NL-SAS drives in RAID-6 (48TB usable), 32GB of RAM, a 16-core Silver processor, a BOSS card for the OS (no OS included), and 10Gb networking, and my pricing is something like $15k. If I were to swap to 3.84TB SSDs (read-intensive 1 DWPD, 6Gb SATA; double the price for 12Gb SAS) like you specified, then I’m looking at $45k. For most/all of my clients, those SSDs aren’t necessary at 3x the price.

Userlevel 7
Badge +6

Yep, the 4-hour hard drive SLA is the key selling point for me; that’s not exclusive to Dell, of course. But I had a client on the outskirts of London that HPE could never get disks to in 4 hours, and I thought: if you can’t get disks to a customer in 4 hours there, how are you gonna deliver to a rural location?

I have a client in rural Kansas… a hospital. I’m pretty sure they opted for the 4-hour mission-critical ProSupport. Now granted, parts will never be there in 4 hours, but the 4-hour window is for arriving at a solution, not resolving the issue. It still gets them parts faster than next business day.

Userlevel 7
Badge +6

Now that I’m going through the Dell website on something other than my phone, the pricing isn’t too far off if we did 6x 8TB SAS HDDs (32TB rather than 38TB, but still within our requirements)… Plus Dell’s ProSupport Plus is amazing (from my previous job experience), whereas Synology doesn’t offer that level of support to my knowledge. We will definitely be going back to the drawing board, I think, to look around.

 

Yep, you’ll find them to be pretty competitive, and much more reliable, with much better support. My plan is to never buy another NAS as a backup repo for my clients. And yes, ProSupport is awesome! Synology support is decent, but it’s not enterprise-level support, because they’re not enterprise-level devices, IMO.

Userlevel 2
Badge

And just to clarify: running a Linux VM on ESXi (on Dell hardware, for example) would be similar to running the Linux VM on the Synology? I.e., additional attack vectors (the hypervisor and the VM)?

Userlevel 7
Badge +17

And just to clarify: running a Linux VM on ESXi (on Dell hardware, for example) would be similar to running the Linux VM on the Synology? I.e., additional attack vectors (the hypervisor and the VM)?

Correct.

For a hardened repository, a physical server with no VM is advisable.

Userlevel 7
Badge +13

And just to clarify: running a Linux VM on ESXi (on Dell hardware, for example) would be similar to running the Linux VM on the Synology? I.e., additional attack vectors (the hypervisor and the VM)?

Correct.

For a hardened repository, a physical server with no VM is advisable.

100% agree: even if it is technically possible, if you take hardening seriously then you need dedicated server hardware, so you can minimize attack vectors.

Userlevel 7
Badge +7

And just to clarify: running a Linux VM on ESXi (on Dell hardware, for example) would be similar to running the Linux VM on the Synology? I.e., additional attack vectors (the hypervisor and the VM)?

Correct.

For a hardened repository, a physical server with no VM is advisable.

100% agree: even if it is technically possible, if you take hardening seriously then you need dedicated server hardware, so you can minimize attack vectors.

Absolutely. An isolated physical host. I think a while ago Gostev mentioned disabling remote access and only allowing root access when physically at the host.
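A minimal sketch of that idea on an Ubuntu repository, assuming you still have physical console access (since this locks everyone else out):

    # Disable SSH entirely so logins require the physical console
    sudo systemctl disable --now ssh

    # Or the softer option: keep SSH but forbid root logins, in /etc/ssh/sshd_config:
    #   PermitRootLogin no
    sudo sshd -t && sudo systemctl reload ssh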

Userlevel 7
Badge +7

@JMeixner & @ravatheodor are bang on here that it’s not advisable, and that’s because of the extra risks of hypervisor issues and out-of-band management exploitation that would then result in console access to your server.

 

A hardened Linux repository just sets an attribute on your files marking them immutable, with a date when the immutability flag expires. Anyone with root-level access on the system can remove this. And it’s far easier to gain root access at console level.

 

You’d need to take a lot of steps to realistically secure this to make it worthwhile. It’s certainly easier with physical servers.

 

Have you purchased this already? I’ve found Dell servers sized for repositories cost comparably to a Synology plus your own disks, and the benefit of advance replacement and field-engineer warranties makes Dell/HPE solutions much better value (not just those vendors, but you get the idea!).

Totally agree with @MicoolPaul 👌

Userlevel 7
Badge +10

Other curve ball…

 

It’s not recommended to run the VHR (Veeam Hardened Repository) as a VM. Simply take over the hypervisor and poof.
