Those AceMagic AM06Pro devices are very interesting and might be a good replacement for the NUCs I got from you that I am still running (all with 64GB of RAM now).
I have my HomeLab going and am always testing things Veeam-related, amongst other things. Some of the setup -
- 4 x Intel NUC Skull Canyons with 64GB RAM each, 2 x NVMe drives - 1 for the VMware ESXi 8.0U2 install and the other for vSAN.
- USB NICs added using the VMware USB Network Native Driver Fling - 2 NICs for each NUC (see the sketch after this list for a quick way to verify them).
- USB-connected hard drive dock for Intel Optane U.2 drives - used in the vSAN configuration as cache drives -- blog about this here - Intel Optane U2
- Synology DS920+ - used as a backup NAS for Veeam, and for other things like cell phone photos via the Drive app.
- Synology DS923+ - used as the main iSCSI device for VMware VMs and testing.
- 2 x MokerLink switches - 2.5GbE, to allow me to use the 10GbE port on my Bell modem with 8Gbps fiber service.
- Veeam VBR (Backup & Replication) and VCC (Cloud Connect) VMs used for testing.
- Also test in both Azure and AWS, but I try not to use up all my credits LOL
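For anyone copying the USB NIC setup, here's a minimal Python sketch of how I check that the Fling NICs actually enumerated on each host - hostnames and credentials are placeholders for your own environment, and it assumes the vusbX naming the Fling driver typically uses:

```python
# Minimal sketch: confirm the Fling-provided USB NICs show up on each host.
# Hostnames and credentials are placeholders for your own environment.
import paramiko

HOSTS = ["nuc1.lab.local", "nuc2.lab.local", "nuc3.lab.local", "nuc4.lab.local"]

def list_nics(host: str, user: str = "root", password: str = "changeme") -> str:
    """SSH to an ESXi host and return the output of 'esxcli network nic list'."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        _, stdout, _ = client.exec_command("esxcli network nic list")
        return stdout.read().decode()
    finally:
        client.close()

for host in HOSTS:
    output = list_nics(host)
    # USB NICs from the Fling typically enumerate as vusb0, vusb1, ...
    usb_nics = [line for line in output.splitlines() if line.startswith("vusb")]
    print(f"{host}: {len(usb_nics)} USB NIC(s) present")
```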
Just recently tested and wrote a blog on Synology immutable volumes, and on replicating them as well. Most of my testing, as you can imagine, is with Veeam, but I also run Runecast as part of the vExpert program.
Being able to test anything new and blog about it is what I love my HomeLab for.
I just received a seeded Synology DS723+ yesterday to test out (and blog on). It currently has two 4TB Synology SATA drives, two 400GB M.2 drives, and a 10GbE network card. I'm going to play around with a few things, including testing performance with 1GbE vs dual 1GbE vs 10GbE, SMB vs NFS vs iSCSI, and M.2 caching vs none. Basically, just use and abuse it and see what happens. I've always felt that SSDs were less beneficial to performance than 10GbE, but I want to test whether that gut feeling is correct, and if so, by how much.

Probably the thing I look forward to most is running a Linux Hardened Repository (LHR) VM on the M.2 drives, using the 4TB drives as the repository. Not all Synology NASes can set up the M.2 drives as a separate volume - most will only run them as a caching tier for the main disks - but a couple of models, this being one of them, can do so without some sort of hacky workaround. I have eyed this sort of setup for a couple of clients, so it'll be nice to set this one up without being under the gun of implementing a client project. Additionally, most of the home lab is really home production, but I can easily move the NAS into the work lab for testing and playing around as well.
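A rough sketch of how I plan to drive the protocol side of that testing - fio against each mount point, parsing the JSON summary. The paths are placeholders for wherever the SMB/NFS/iSCSI targets get mounted:

```python
# Run the same sequential-write fio job against each protocol's mount point
# and compare throughput. Paths below are placeholders.
import json
import subprocess

TARGETS = {
    "smb": "/mnt/ds723_smb",
    "nfs": "/mnt/ds723_nfs",
    "iscsi": "/mnt/ds723_iscsi",
}

def run_fio(directory: str, rw: str = "write") -> float:
    """Run a 1MiB-block test and return throughput in MiB/s."""
    cmd = [
        "fio", "--name=bench", f"--rw={rw}", "--bs=1M", "--size=2g",
        "--direct=1", "--numjobs=1", f"--directory={directory}",
        "--output-format=json",
    ]
    result = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    side = "write" if "write" in rw else "read"
    return result["jobs"][0][side]["bw"] / 1024  # fio reports bandwidth in KiB/s

for proto, path in TARGETS.items():
    print(f"{proto}: {run_fio(path):.0f} MiB/s sequential write")
```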
I haven't really looked into Jim's Synology immutable volumes, but I know they exist. Curious to see how that pans out as well… might have to look more into this.
Let me know how the SSD volume goes. I am debating changing from cache to a volume on my DS923+, and possibly my DS920+ as well, to run a couple of high-performance VMs on that volume instead.
I'll be blogging about it too.
Will do. Like I said, I haven't really seen much need for caching with backup data. Writes are pretty sequential, I believe, and reads probably are too. But I'll have to run the test and see if my theory is correct.
Hello, thank you for sharing @k00laidIT. I've been thinking for many months about rebuilding my homelab with mini PCs, and you've shared some great information with us. Was it easy to add RAM to the AceMagic? How did you add storage - via SATA or an external drive?
Same - the writes and reads should be pretty sequential, so caching can become a moot point.
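One way to put a number on that theory: run the same fio job with sequential vs random writes against the volume, once with the SSD cache enabled and once without. The mount path here is a placeholder:

```python
# Compare sequential vs random write throughput on the same volume.
# Run it twice - with and without the SSD cache - to see what the cache buys.
import json
import subprocess

def fio_bw(rw: str, directory: str = "/mnt/backup_vol") -> float:
    """Return fio write bandwidth in MiB/s for the given IO pattern."""
    cmd = ["fio", "--name=cachetest", f"--rw={rw}", "--bs=512k", "--size=2g",
           "--direct=1", f"--directory={directory}", "--output-format=json"]
    out = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    return out["jobs"][0]["write"]["bw"] / 1024

seq, rand = fio_bw("write"), fio_bw("randwrite")
print(f"sequential: {seq:.0f} MiB/s, random: {rand:.0f} MiB/s")
# If the cache only helps the random case, backup jobs (mostly sequential)
# won't see much benefit from it.
```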
Great post. I've been on the fence about an older V7000 kicking around. I'd take a single enclosure and fill it full of SSDs for the performance, but the fan noise and power draw are two things holding me back.
If I end up moving to a house with a large garage, keeping it out there would make more sense. My old lab in the basement used to provide a constant hum of fans for the whole house.
ProTip - if you run your entire network on virtualized servers, then after a slight power blip the TV and internet stay down while the server POSTs/boots, ESXi boots, and your VMs boot. Haha.
I have learnt not to turn the lab into "home production" because of that. I do still have an Intel NUC running Home Assistant, but it boots lightning fast, and if the internet goes down I have alternate ways to turn on the lights, etc. I've read horror stories about guys who can't turn on lights or do much of anything in their homes.
@dloseke - Do you do a lot of file transfers? For me, NVMe disks make the biggest difference for day-to-day tasks, boot times, and overall performance. I've got a 10Gb card in my work PC now, plugged directly into the switch. That makes a huge difference copying files around, but at home I don't know if it would get utilized enough. Even on a 1Gb internet connection I can stream multiple 4K streams, play games, and download files at the same time. When we switched to an all-flash array at work, things really started blazing.
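If you want to see whether a 10Gb link actually gets used, a quick iperf3 wrapper like this works (with `iperf3 -s` running on the other end; the server address is a placeholder):

```python
# Measure raw link throughput with iperf3 and report Gbit/s.
# Assumes 'iperf3 -s' is running on the target (NAS, switch-attached PC, etc.).
import json
import subprocess

def iperf_gbps(server: str = "192.168.1.50", streams: int = 4) -> float:
    """Run a 10-second multi-stream iperf3 test, return Gbit/s received."""
    cmd = ["iperf3", "-c", server, "-P", str(streams), "-t", "10", "-J"]
    result = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"{iperf_gbps():.2f} Gbit/s")
```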
I mentioned it in another post, but the lab I've finally got Veeam running on is a load of cobbled-together kit from old gaming PCs/disks that I've never gotten rid of (so for any hoarders out there, please enjoy this positive reinforcement). Mostly AMD CPUs, 32GB RAM, and all the RGB lighting turned off, because this is now a work PC and not a gaming rig.
I use Proxmox for virtualisation - that's just using an internal SSD at the moment, but I'd like to explore a shared-storage Kubernetes cluster to run any/all software I can, with other apps/services as VMs.
I'm liking the look of these mini PC/NUC alternatives, so I will explore what a cluster of these would be like. Hopefully I can run some sort of shared-storage cluster across them to make k8s unnecessarily bullet-proof!
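As a first "bullet-proof" check once the cluster exists, something like this could poll node readiness with the official kubernetes Python client - purely a sketch, assuming a valid kubeconfig on the box running it:

```python
# Poll node readiness across the cluster using the official kubernetes client.
# Assumes kubectl/kubeconfig are already set up on this machine.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node carries a list of conditions; "Ready" is the one we care about.
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    print(f"{node.metadata.name}: Ready={ready}")
```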
Is anyone running TrueNAS at home/in the lab? Sounds like Synology is king here - I've never had the pleasure, though. TrueNAS SCALE - the Linux-based version they forked a few years ago - has been pretty solid for me; it's now running media, storage, and a Veeam repo. Again, I hear just as much positivity about alternatives such as QNAP, Synology, Unraid, etc. I don't think there is any perfect solution, so it's always good to know other people's experiences.
Since reviewing the Veeam Security & Compliance Analyzer (née Best Practices Analyzer) checklist, I am also firing a copy of the backup data out to Backblaze B2 to meet the 3-2-1 rule and backup copy practices, following a glowing review from a colleague (he said it was "really cheap" - I was sold).
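Since B2 speaks the S3 API, a quick sanity check that the copy job actually landed offsite is straightforward with boto3 - the endpoint, bucket name, and keys below are placeholders for your own account:

```python
# List recent objects in the B2 bucket the copy job writes to, via the
# S3-compatible API. Endpoint is region-specific; bucket/keys are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",
    aws_access_key_id="KEY_ID",
    aws_secret_access_key="APPLICATION_KEY",
)

resp = s3.list_objects_v2(Bucket="veeam-backup-copies", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["LastModified"], obj["Size"])
```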
I also want to run through a Linux Hardened Repo install, if only to say I've done it.