Home Labs


Userlevel 7
Badge +7

Hi everybody!

I'm writing this question more out of curiosity, and to rethink my home lab.
I've been using an HP DL360 Gen8 with 128GB of RAM and a 2TB SSD running vSphere 7 to host my full home lab: a three-host vSphere cluster, TrueNAS for NAS and NFS storage, pfSense for routing and firewalling, and a Windows Server 2016 machine with Veeam B&R.

Now I'm thinking about something more "electricity friendly", but I would love to read about setups and ideas from all of you; maybe by mixing them up a bit we can come up with the SUPER HOME LAB!

Thanks!
Read you in the comments!


26 comments

Userlevel 7
Badge +20

My current setup -

4 x Intel NUC Skull Canyon, each with an i7-6600HQ and 64GB of RAM (256GB total)

ESXi installed on a 256GB NVMe drive - the hosts also have a 1TB NVMe drive, so I may revert to USB boot to get vSAN going again now that I have more RAM

NAS - Synology DS920+ with a DX517 expansion unit. 55TB of usable capacity in total - using a combination of NFS and iSCSI for the lab

16-port gigabit switch - thinking of upgrading to a 2.5GbE model because of the ISP speed

Sager laptop with an older i7 and 16GB of RAM - Windows 10 Pro

ISP - Bell 3Gbps fiber connection - the modem has a 10GbE port on it, so I'll try to leverage that with the 2.5GbE switch, but I'd also need 2.5GbE USB NICs for the laptops

Userlevel 7
Badge +7

(Quoting @Chris.Childerhose's setup above.)

Very cool Setup! OMG!

Userlevel 7
Badge +20

Also, the Synology has NVMe drives (2 x 280GB) for caching as well. I need to upgrade these to 1TB or larger to split the cache between two disk pools.

Userlevel 7
Badge +20

(In reply to "Very cool Setup! OMG!" above.)

Thanks. It took a bit to get it all, but it does great for learning, studying, etc. 😁

Userlevel 7
Badge +17

(Quoting @Chris.Childerhose's setup above.)

Great setup. 👍🏼

But I think it consumes far more power than @HunterLAFR’s setup, doesn't it?

Userlevel 7
Badge +20

(In reply to the power consumption question above.)

Not a lot, to be honest. I would think a full server would consume more than my setup because of its power draw. I could be mistaken, but it is not bad for me.

Userlevel 7
Badge +13

(In reply to the power consumption discussion above.)

 

Well, at low workload it's about 150W, up to 450+W at heavy usage. The G8 series was quite loud (fan noise) and not very energy efficient…
 

At idle, @Chris.Childerhose's setup could be (it's only a rough estimate):

4 x 5W + 10W + 3W + 7W + 75W + 15W = 130W
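
If anyone wants to play with those numbers, here's the same estimate as a quick Python sketch. The mapping of wattages to devices is my guess, and the €0.30/kWh electricity price is just an assumption; swap in your own values.

```python
# Rough idle-power and running-cost estimate for a home lab.
# The per-device wattages and the electricity rate are assumptions, not measurements.

idle_watts = {
    "4 x NUC": 4 * 5,
    "switch": 10,
    "modem": 3,
    "misc": 7,
    "NAS + expansion": 75,
    "laptop": 15,
}

rate_eur_per_kwh = 0.30  # assumed electricity price

total_w = sum(idle_watts.values())
kwh_per_month = total_w / 1000 * 24 * 30
cost_per_month = kwh_per_month * rate_eur_per_kwh

print(f"Idle draw: {total_w} W")                    # 130 W
print(f"Energy:    {kwh_per_month:.1f} kWh/month")  # ~93.6 kWh
print(f"Cost:      {cost_per_month:.2f} EUR/month") # ~28 EUR
```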

Userlevel 7
Badge +20

I've got a couple of servers for "burstable" workloads: if I need to suddenly spin up a ton of resources, I can do so. But as these are more power hungry, I try to avoid it.

 

My lab is a bit older now and due a refresh, but I have a 24x7 in-use home lab server using an Intel i5-4670T processor. I like the Intel T-series processors over the "mobile" HQ parts because they fit standard ATX/mATX/mITX motherboards, so you can chop and change whatever connectivity you want while still using a 35-45W processor, and you've got full motherboard functionality to underclock if you want to squeeze power further. This box runs ESXi and covers my 24x7 needs, such as my media server, firewall, and Wi-Fi controller. By keeping it on I can still use Wake-on-LAN (quick sketch below) to get my more power-hungry servers up and running while I'm on the road.
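
For anyone who hasn't scripted WoL before, the "magic packet" is just six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, sent as a UDP broadcast. A minimal Python sketch; the MAC and broadcast address below are placeholders:

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet to the NIC with the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    # Magic packet: 6 x 0xFF, then the MAC repeated 16 times.
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC of the power-hungry host to wake:
wake_on_lan("aa:bb:cc:dd:ee:ff")
```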

 

If you're looking at upgrading, I'd suggest workstation-class machines; they're pretty handy, and usually capable of either beefy specs or good energy efficiency. I'd also try to stick to a single socket and use low-voltage DDR4, for example, as these little efficiencies add up under sustained use.

 

For my next refresh I was looking at the Intel T series, but I have issues with Intel's latest generations in general. Eight cores / sixteen threads gives a "35W" TDP, but only at about 1.4GHz, so it would be boosting heavily most of the time; 12th Gen (Alder Lake) also needs its efficiency cores disabled to work with a hypervisor anyway, making it a lot of wasted silicon; and to boost to a decent 3-4GHz the power draw goes up to over 100W.

 

AMD, on the other hand, offers the Ryzen 7 5700GE running 8 cores at 3.2GHz with a 35W TDP, though those are OEM-only parts. They also offer the 5700G with a default 65W TDP (and higher base clocks) plus a configurable TDP-down "eco mode" at 45W, with the trade-off of lower base and boost clocks. In a lab environment CPU overcommit isn't really a problem, and the main performance issues I find are when the lab is paired with spinning disks. I just like to have 6-8 cores so that I can create the odd beefy VM where necessary (looking at you, VDRO).

 

In summary, I've got older, less efficient tech driving my servers when I need them, but my always-on server is as efficient as can be: no turbo boost, low-voltage DDR3 RAM, an SSD, and a nice low-RPM Noctua fan that I barely hear!

Userlevel 7
Badge +11

I use two physical Lenovo TS150 hosts. Both have 32GB of RAM and some internal hard disks, and they run vSphere 7.0 with vCenter.

For networking I have UniFi gear doing the routing and switching, segregating some VLANs between my home lab and my personal Wi-Fi network.

These servers are Lenovo's entry-level option, and the maximum memory is 64GB. The good part is their low energy consumption.

This is the current pic:

I need to upgrade my internal disks to SSDs, and I would like to have shared storage for both hosts too. If you have any tips or advice, just let me know.

Userlevel 7
Badge +20

(Quoting the Lenovo TS150 setup above.)

If you can find a cheap NAS, that is probably the easiest option for shared storage, unless the servers can hold some drives and you deploy vSAN, but that needs more RAM depending on the configuration.
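
If you do go the NAS route, mounting the same NFS export on every host is only a few lines with pyVmomi. Just a rough sketch; the vCenter address, credentials, NAS IP, export path, and datastore name are all placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own lab values.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Enumerate every ESXi host managed by this vCenter.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.1.50",   # NAS IP (placeholder)
        remotePath="/volume1/lab",   # NFS export (placeholder)
        localPath="nfs-lab",         # datastore name shown in vCenter
        accessMode="readWrite",
    )
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted nfs-lab on {host.name}")

Disconnect(si)
```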

Userlevel 7
Badge +8

I have an IBM M5 server with 512GB of memory, but I've recently decommissioned it. It had 8 or 10 1TB SAS SSDs in it, so it was great for ESXi, Veeam, camera software, Home Assistant, and a bunch of other VMs, but the fan noise was just too much.

I moved Home Assistant to a NUC with an M.2 NVMe drive and couldn’t be happier.

 

I have a few 48-port gigabit PoE switches, but once again the fan noise was creeping up, so I am only using one of them plus a smaller non-PoE switch in my house now.

 

I did have an IBM DS3400 and Fibre Channel switches at one point too, but unless you NEED that much storage, the power draw is pretty significant. I may soon have the chance to add a V7000 full of SSDs with FC switches to the lab, so perhaps I'll go down that route, but this time I'll use a smart plug to either automate turning it off and on when needed, or at least monitor how much it's costing me to justify it.

 

For $20 you can get power-monitoring plugs to see what your home lab is costing you; I highly recommend it. That IBM server idled at over 200W, so I can calculate the ROI on my NUC to present to my wife :)
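
As a rough illustration of that ROI math (the NUC price, its idle draw, and the electricity rate here are assumptions, not my actual numbers):

```python
# Back-of-the-envelope payback for swapping an always-on server for a NUC.
# Every value below is an assumption -- plug in your own measurements.

server_idle_w = 200    # the old server's idle draw
nuc_idle_w = 10        # assumed NUC idle draw
rate_per_kwh = 0.15    # assumed electricity price, $/kWh
nuc_price = 500        # assumed NUC purchase price, $

saved_kwh_per_month = (server_idle_w - nuc_idle_w) / 1000 * 24 * 30
saved_per_month = saved_kwh_per_month * rate_per_kwh
payback_months = nuc_price / saved_per_month

print(f"Savings: ${saved_per_month:.2f}/month")   # ~$20.50/month
print(f"Payback: {payback_months:.1f} months")    # ~24 months
```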

Userlevel 7
Badge +20

(In reply to the smart plug suggestion above.)

Any brand recommended for smart plug monitoring? I might just get one to see the consumption of all my devices. 😁

Userlevel 7
Badge +8

(In reply to the smart plug brand question above.)

If you have Wyze cams, their outdoor plug does some monitoring, and you can reuse it outside later too.

 

I got a few Zigbee ones from a company called Sengled, and they are solid. I don't use their app, though, because I run a Zigbee network with Home Assistant; I have too many devices to want to go the Wi-Fi route.

 

They all work with Google, Alexa, IFTTT, etc., so smart plugs, bulbs, and motion sensors are pretty fun if you want to start doing some home automation too.

 

If you google "Wi-Fi smart plug" or "Wi-Fi power monitoring plug" there are a ton of options. I'll eventually go with full smart outlets in my walls, or get a smart breaker box installed one day if I move.

 

 

Userlevel 7
Badge +20

(In reply to the smart plug recommendations above.)

Thanks for the tips. I already have some smart plugs for lighting in my house, and we have Alexa units all over the place (and one Google, LOL), so I might check things out and see.

Userlevel 7
Badge +8

Nice. Look into Home Assistant to integrate it ALL together. I can use Google, Alexa, AND Siri to control things, but with Zigbee devices I can still operate my lights and plugs if my internet goes down. Once everything is ONLY connected to the cloud, it gets tricky when that happens; Home Assistant takes control of it all.

 

At this point, for the few extra dollars per plug, power monitoring is worth it. I found out how many watts it takes for my band to practice and realized that if my power goes out, I can run all the amps, the PA, the mixer, and a lamp from a battery backup for a while. The home lab, old beer fridges, and other power-hungry devices are fun to watch as well.

 

I always hook up network devices and PCs to smart plugs too. Reason one is that I can remotely power-cycle anything that freezes, but after seeing what my gaming PC idles at, I have now enabled sleep mode rather than 24/7 performance mode, haha. I didn't realize it was costing me about $30-$40 a month to keep it on all day.

 

Userlevel 7
Badge +7

(In reply to the idle power estimate above.)

Noise was not an issue for me; I have the server in a remote location, and running it 24x7 costs me like €10 to €15 per month, which is not too expensive for studying and having it accessible all the time.

It's very cool to read about and see other people's labs; it's like looking at before-and-after cabling pictures!

Userlevel 7
Badge +7

(In reply to the shared storage question above.)

A cheap Synology will do the trick, or you can deploy virtual storage, though then you lose the ability to have fault tolerance; still good enough for some vMotion and DRS tests!

Userlevel 7
Badge +6

I posted my work lab in another thread a couple of months back. It's not terribly power friendly at the moment because it uses an EqualLogic PS6000 for primary storage, and that will be replaced with an even less power-friendly PS6210E down the road (more spinning disk). That said, my hosts, consisting of six-ish R610s, will be replaced with R720s and an R820 (I think). I'm not sure whether the 12th-generation Dells are more power efficient than the 11th, but I believe the processors are more capable, so it may be closer to a break-even. And then there is an HP Z-series workstation running as my Veeam host, with VMs running VBR and VBO (and VDRO in the near future).

My home lab, on the other hand, is really quite small: an R520, an Extreme Summit 40-port gigabit PoE switch, a Ubiquiti USG Pro, and a UBNT AC Pro AP (I need to expand my wireless). I also have a small Synology DS218+ NAS for my backup repository, accessed via NFS.

Userlevel 7
Badge +6

(Quoting the Lenovo TS150 setup above.)

 

I like this install; simple and clean. It just needs some shared storage IMO, unless you're running vSAN or other virtualized storage.

Userlevel 5
Badge

I have

2x DL360 G9 - 64GB and 96GB memory - 2TB disk
2x DL360 G7 - 112GB memory - 2TB disks
1x Synology - 30TB
1x HP ProCurve 2848
1x Desktop with 10TB, just used for backups.

Of course, I don't have everything powered on all the time. Since I bought the G9s, I always use only those; I only power on the G7s when I need to bring up my nested VCF, vSAN, and NSX-T lab, because of the extra memory.

I'm trying to find the budget to add more memory to the G9s so I can decommission my G7s and try to sell them.

Userlevel 7
Badge +8

That is a decent amount of disk!!

Userlevel 7
Badge +20

Reviving the talk of lab power consumption from earlier in the thread: I grabbed myself an Eve smart plug at the weekend. It's HomeKit integrated and has remote power on/off functionality, which still worked via Bluetooth to turn things back on when I accidentally switched my firewall off during testing 😂

 

It monitors in real time and can be queried as often as you like. I like that it doesn't have any remote servers to call out to, and it doesn't even connect to Wi-Fi, so it can't quietly send or receive any data; firmware updates are downloaded to your app and then pushed to the Eve device. That's really nice from a privacy mindset, when the sea of IoT devices is very questionable in this regard… Instead it uses Thread and Bluetooth for all communications 😁 My 24x7 ESXi server costs me around 30p a day and averages 30W total power draw from the wall, with a peak so far of 42W, as I've got turbo boost disabled and power-saving modes defined on my motherboard and within the OS.

Userlevel 7
Badge +8

That is impressive power usage. My Intel NUC is great; my server idled at over 200W before, so it's off most of the time these days.

 

I'm currently focusing on a decent SSD array, as spinning disks tend to draw a fair bit more. I monitor everything these days, but I'd also advise everyone looking at smart home integration/plugs that the new standard called Matter is coming out soon and will allow ALL devices to talk together. No more vendor lock-in, and it's going to be supported by Google, Apple HomeKit, Philips, and everything else, so I have been holding off.

Userlevel 7
Badge +6

(In reply to the Eve smart plug post above.)

 

I got a clamp-on ammeter the other day, clipped it to one of my main leads, and then turned on my pool pump (and verified with it off as well). I did the math, and it looks like running my pool pump 24/7 costs me about $35 per month. I should power down my home lab and see what the difference is as well… I'm betting it's not nearly as bad; guessing maybe $5.

Userlevel 5
Badge

(Quoting my setup list above.)

 

Just a correction: my Synology DS1515+ is 10TB, not 30TB (I don't know where I got that 30 from).
