Questions on Homelab setup


Userlevel 7
Badge +9

I wish to build a new home lab. Here is a guide, along with many others I found on Reddit.
- How is your lab set up? What would you recommend? My main virtualisation solution would be VMware (I would need multiple hosts here). I would also have Hyper-V and Proxmox VE within this environment. Yes, this is possible!

I have my own ideas, but I would like to learn from you. Keep in mind, I need a cost-effective solution, from hardware to power and so on. This is a talking point about the installation only: I do NOT need setup (deployment) tips, as that part is a walkthrough for me. I just want ideas on how to set up a cost-effective lab that is still a lab to be reckoned with.



62 comments

Userlevel 7
Badge +20

Hi!

 

So, the beauty of a home lab is, you can adjust to exactly what you need.

 

I had a 24x7 media server at home that I actually P2V’d yesterday with Veeam Agent and restored onto ESXi. I’ve had no increase in power consumption, because the server was on all the time anyway. But the benefit? Rather than it running VMware Workstation and having 1st-class and 2nd-class workloads (i.e. poor performance on Workstation relative to anything running natively on Windows), now everything is equally distributed. I’m using this host for my 24x7 needs: firewall & VPN, Wi-Fi controller, and media server, and I’m going to move my VB365 VM onto ESXi too next weekend.

 

So that’s my 24x7 host, spec’d as a quad core with 16GB RAM and SSD storage.

 

Then I have my extra 2x hosts that I power on when I need them for my lab. These are quad core with 32GB RAM but only traditional HDDs, as they’re HPE, and if you use aftermarket disks with HPE the fans run at a constant 100% speed.

 

This way I can leverage iLO to power on my extra hosts if/when I need them, even if I’m working away from home, and I don’t end up with a dramatic power bill.
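If you ever want to script that remote power-on rather than clicking through the iLO web UI, here’s a rough Python sketch against iLO’s Redfish REST API (the iLO address, credentials and system path below are placeholders, and it assumes an iLO generation with Redfish enabled):

```python
# Rough sketch: power on a lab host remotely via the iLO Redfish API.
# Assumes iLO 4/5 with Redfish enabled; address and credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()  # lab iLOs typically use self-signed certificates

ILO_URL = "https://ilo-lab-host1.example.local"   # hypothetical iLO address
AUTH = ("labadmin", "replace-me")                  # hypothetical credentials

resp = requests.post(
    f"{ILO_URL}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
    json={"ResetType": "On"},   # the schema also defines e.g. "ForceOff", "GracefulShutdown"
    auth=AUTH,
    verify=False,
    timeout=30,
)
resp.raise_for_status()
print("Power-on request accepted:", resp.status_code)
```

Run it from anywhere that can reach the iLO network (e.g. over your VPN) and the lab hosts are booting before you’ve made a coffee.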

 

I’ve seen some great options for labs recently, and there’s a great second-hand market where you can get recent-generation tech for £1-2k that is powerful enough that you only need the one host. But if you consolidate and are thinking of nested ESXi/Hyper-V etc., you need to consider the power draw of that server: will it be running 24x7, wasting a ton of power and cooling while idle?

 

Also worth checking out are the workstation-class machines from Dell/HPE etc. These are usually missing some features such as iDRAC, depending on vendor, but they’re normally far more compact and can still take dual sockets with hundreds of GBs of RAM and more desktop-class SSD/NVMe (a lot cheaper!).

 

Another approach is lower-power/compact devices such as Intel NUCs. It’s quite common to have a 3-4 node cluster of NUCs; their main limitations are direct-attached storage options and add-in card support, but I see a lot of people use a reputable-brand NAS such as Synology or QNAP and present shared storage to the nodes that way.
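As a rough illustration of the shared-storage part, mounting the NAS’s NFS export on each node can be scripted too; here’s a sketch with pyVmomi (every hostname, path and credential below is a made-up placeholder, and it assumes an NFSv3 export already configured on the NAS):

```python
# Sketch: mount an NFS export from a NAS as a datastore on each ESXi node.
# Uses pyVmomi; all hostnames, paths and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NAS_HOST = "nas.lab.local"            # hypothetical Synology/QNAP address
NAS_PATH = "/volume1/vmware_ds"       # hypothetical NFS export
ESXI_NODES = ["nuc1.lab.local", "nuc2.lab.local", "nuc3.lab.local"]

ctx = ssl._create_unverified_context()  # lab hosts with self-signed certs

for node in ESXI_NODES:
    si = SmartConnect(host=node, user="root", pwd="replace-me", sslContext=ctx)
    # Connected directly to the host: root folder -> ha-datacenter -> compute resource -> host
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    spec = vim.host.NasVolume.Specification(
        remoteHost=NAS_HOST,
        remotePath=NAS_PATH,
        localPath="nas-shared",       # datastore name as seen by the host
        accessMode="readWrite",
        type="NFS",                   # NFSv3; "NFS41" for NFSv4.1
    )
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    Disconnect(si)
```

It’s just the scripted equivalent of adding the datastore by hand in the host client on each node.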

 

Another consideration will be licensing. You’re not going to want to purchase full retail VMware licensing for a lab, but otherwise you’d either be stuck reinstalling ESXi every 60 days on a trial license, or using the free version that doesn’t support API access for backups, cluster support, etc. I’d suggest either the VMware vExpert program, if you’re accepted, to get NFR licenses, or alternatively the VMUG Advantage program. Both give you licenses valid for a year. VMUG Advantage costs money, normally $200 a year, but there are times you can get it at a discount: well-known VMware employee William Lam is currently organising a group purchase which should be approx. $170 instead, and otherwise VMware has typically done discounts around VMworld events etc. The program also includes discounts on training & exams amongst the perks. Link here: https://williamlam.com/2022/06/2022-vmug-advantage-community-group-buy.html
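On that licensing point, if you end up juggling yearly NFR/VMUG keys it can be handy to script a quick inventory of what’s applied and what’s in use; a small pyVmomi sketch, with placeholder connection details:

```python
# Small sketch: list the licenses applied in a lab vCenter/ESXi host, useful when
# rotating yearly VMUG Advantage / vExpert NFR keys. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()           # lab self-signed certs
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="replace-me", sslContext=ctx)

for lic in si.content.licenseManager.licenses:
    print(f"{lic.name}: {lic.licenseKey} (used {lic.used} of {lic.total})")

Disconnect(si)
```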

 

Hope this helps! Happy to answer any Q’s 🙂

Userlevel 7
Badge +17

Very detailed answer. 😀 And it answered my question about VMware license options, too… 😎

Hi there,
For my home lab, I virtualized everything onto an HP Gen8 server, located in a warehouse near my house. Inside it I run my base VMs, like DC, file services, etc., and I also have two ESXi hosts running as a cluster, plus a Veeam B&R machine as a backup server.

I’m writing an article to describe my lab in more detail; it came to less than €600, and electricity consumption is OK.

For licenses, I belong to the VMUG Advantage program, so I have access to those, and to Veeam NFR licenses as well.

Userlevel 7
Badge +6

My “home lab” currently consists of a standalone PowerEdge R520 hand-me-down running ESXi 7.0 U3. It’s running a few VMs, and it’s somewhat used as a second site. Previously it was an R610 that recently kicked the can, and prior to that an IBM x3550 M2 and an x3650. I have a Synology DS218+ as a backup repo and am running an Extreme Networks Summit 48-port PoE switch with a Ubiquiti USG Pro for the firewall and an AC Pro AP (I need to expand my wireless). VMs include a management/utility server (runs the Ubiquiti management app), VBR 11a, VBO, Pi-hole, and an NVR (Milestone) for my older PoE cameras (Hikvision among others). It’s all small and really more than I need for home use. My time and energy, when I can find them, are dedicated to my “work lab”.

My work lab consists of 6 PowerEdge R610s running ESXi 6.7 (one died; I have three more in another room not in use). They use an EqualLogic PS6000 for shared storage through two Dell PowerConnect 6248 switches in a stack. I’m using an old ASA 5505 for the firewall. My lab network extends to another rack in a separate room where I stage client equipment, interconnected with an Adtran NetVanta 1534 switch. There is an HP Z520 workstation with an added RAID card and drives running ESXi as well. It has two VMs: one running VBR 11a to back up the lab VMs, and the other slated for VDRO, though I haven’t had a chance to set it up. There is an APC 3000 VA UPS running things (currently I get about 7 minutes of runtime, but that’s good enough for power blinks). VMware and Veeam are both NFR licenses for the lab environment. I also have a couple of R410s laying around, but they’re quite light on power, so they’re not used. All VMs are Server 2016 or Server 2019 with a full domain. I have more infrastructure here than most of my clients, but it has proven handy multiple times.

Future plans for the work lab are to upgrade the SAN to a PS6210e that is already racked and to replace the hosts with two R720s and an R820 that I have (once one of the R720s returns from a client site). I’m also in the process of replacing two R720s in production with R730s, which should free up two more R720s for lab use. The lab will be upgraded to the latest build of ESXi 7.0 U3 as well, although I might set up two of the R720s as a Hyper-V failover cluster. I was going to do AHV (Nutanix) at one point, but it’s less common for me and I’m starting to see more customers with Hyper-V, so labbing that up would be a better use of resources for me.

Your lab is what you make it, and that is often based on what you need and what you can get or afford. I’m a bit of a scavenger, as you can probably see. Older servers tend to still have some life in them, parts may be easier to find if you have extra cold spares, etc. However, they often come at the cost of inefficiency as well: some older servers are quite power hungry and can generate significant heat. For me, the most important thing is determining what the goals of the lab environment are and building around that. You also need flexibility and versatility in the lab so that you can spin up whatever it is that you want or need to run without rebuilding everything. Many folks get away with running on older workstations, NUCs, Raspberry Pis, etc. It seems like a lot of folks like to run Proxmox. I’ve never dealt with it, and from the little I’ve heard, I’m not sure why I would when I have perfectly good VMware. I’ve never seen Proxmox running in a production environment, so it makes little sense for me to run it. In a true home lab, if you just need something to run your infrastructure, it might make sense, but if I need to emulate an actual production environment, it’s best to run something as similar to production as possible.

Userlevel 7
Badge +20

Pack up your things everyone, I think @dloseke won’t be beaten on this! Haha. That’s a nice setup there!

Off topic but as you mentioned Ubiquiti:

I was debating whether or not to get a USG-Pro, as I’ve got their WiFi 6 APs. My non-24x7 lab was running the management VM whenever I’d boot it to check for updates etc., but I was clearly missing out on some of the other softer benefits too, so I thought the USG-Pro could be handy for managing that as an added bonus of the device. I’m moving home shortly and am due to get 1Gbps internet, hence looking at the USG-Pro. In the end I decided that my 24x7 ESXi server is powerful enough to process 1Gbps of traffic with IDS/IPS rules, so I put a new Intel X540-T2 card in for future growth up to 10Gbps for LAN & WAN, and am now running pfSense on that. So I have created an Ubuntu VM with Ubiquiti’s management software on there instead, using some of the spare host resources 😆

Userlevel 7
Badge +6

> I was thinking of whether or not to get a USG-Pro as I’ve got their WiFi 6 APs… I’m moving home shortly and due to get 1Gbps internet, hence the looking at USG-Pro.

 

I wouldn’t buy a USG personally. It was given to me, and I don’t love it either. I have an old WatchGuard that I flashed with pfSense a couple of years ago that I suspect I could reflash with OPNsense; I just don’t have much time for that at home. I will say that I have 1Gb internet and it works great for me, but the USG actually shows messages about reduced throughput when you have more of the IDS and other advanced features turned on, I believe due to increased processor load. For home use it hasn’t been an issue, but I originally ran into some stability problems and had to hunt for which features to turn on/off to make it work, which didn’t make a ton of sense.

Some of the Ubiquiti gear is prosumer at best. Their APs and point-to-point wireless are pretty good, and I’ve been told the switches are too. The security products… I’m not sold on them. A lot of folks feel like every firmware release is a beta test. Looking in forums or Reddit, there’s a lot of talk of certain firmware releases being stable, then certain features missing in the next version, back in the one after that, etc. I would never recommend them for business use, although I do have a couple of clients using their APs; we go to Ruckus or Meraki for that.

Userlevel 7
Badge +6

#Humblebrag

The current work lab… it also looks nicer than a lot of production environments I’ve seen. But alas, it needs some reworking as it gets updated to version 2. The pics are a bit off-topic for Veeam, but that HP behind the monitor is the box running the VBR/VDRO VMs, and there are proxies in the ESXi cluster to keep it Veeam-related.

 

Userlevel 7
Badge +9


> I have seen some great options with labs recently and there’s a great 2nd hand market to get recent generation tech for £1-2k that is powerful enough you only need the one host

Thank you @MicoolPaul! This is really detailed, and I appreciate you. I have actually considered this, but have been held back by the other option you mentioned, and I quote: “Also worth checking out are the workstation class machines from Dell/HPE etc.” I am a VMware vExpert, so I have scaled the license hurdle. The latter is a very good consideration! Thank you for sharing once again.

Userlevel 7
Badge +9

> For my home lab, I virtualized everything into a HP gen 8 server, located in a warehouse near my house… and it was less than 600€, and electricity consumption is ok.

Hi @HunterLF, good to know. HP Gen8 servers are pretty cost-effective. I will do my due diligence on this.

Many thanks!

Userlevel 7
Badge +9


Hello @dloseke, thank you very much for sharing your thoughts. New servers are really expensive, except for refurbished ones.

> My work lab consists of 6 PowerEdge R610’s running ESXI 6.7 (one died, I have three more in another room not in use). 

This is a great deal for me right now. How do you deal with the power consumption? From experience, I have to agree with you, and I quote: “However, they often come at the cost of inefficiency as well”. This is why I am sceptical...

> For me, the most important items are determining what the goals of the lab environment are and building around that

I absolutely agree with you!

> Seems like a lot of folks like to run Proxmox. 

It is rarely used. I once worked with a small datacenter that uses Proxmox to this day. Unless you want support, the solution is entirely free… And there are other great solutions developed by Proxmox, such as Proxmox Mail Gateway, etc.

This input is so detailed as well… I love this part of you: “being a scavenger”! It is the best way to learn. You rock!!!

Userlevel 7
Badge +9

> In the end I decided that my 24x7 ESXi server is powerful enough to process 1Gbps of traffic with IDS/IPS rules… and am now running pfSense on that.

Great! Nice to know @MicoolPaul is an all-rounder as well.

Userlevel 7
Badge +6

 

> How do you deal with the power consumption?

It hasn’t been much of an issue for me since I’m only running one, or at most two, servers at home, and electricity is relatively inexpensive in Nebraska. At the office… well, it’s the office. I don’t pay the bill.

 

Userlevel 7
Badge +9

 

> It hasn’t been much of an issue for me since I’m only running one or two servers at home, and electricity is relatively inexpensive in Nebraska.

+1… Same strategy as @MicoolPaul. They should run only when needed. This is what I currently do with my workstations whenever I’m not using them, and it will be extended to my new lab.

Userlevel 7
Badge +20

Another comment to make, be sure that your CPU generation is supported on ESXi 7.0, the older versions of ESXi are all EoL in October. If you’re investing in a lab, you want to know that you can keep up with the latest software that your customers should be running 🙂
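If you want a quick way to pull the CPU model off every host before committing to the upgrade, something like this pyVmomi sketch does it (connection details are placeholders; you’d still check the reported model against the VMware compatibility guide yourself):

```python
# Quick sketch: report each host's CPU model and current ESXi build so they can be
# checked against the VMware compatibility guide before a 7.0 upgrade.
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="replace-me", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    hw = host.summary.hardware
    ver = host.summary.config.product.fullName
    print(f"{host.name}: {hw.cpuModel.strip()} ({hw.numCpuPkgs} socket(s)) - {ver}")

Disconnect(si)
```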

Userlevel 7
Badge +20

@dloseke, agreed on the Ubiquiti comments there. I’ve always been a Cisco guy; I even looked at whether I could get any of the Cisco Meraki GO hardware that has been released. Unfortunately, Meraki GO is basically last-gen EoL hardware, to put it nicely. Cisco doesn’t have a WiFi 6 Meraki GO solution, I can only assume to avoid competing with their Meraki ranges that require a subscription. The Meraki GO firewall was off-putting too, at 250Mbps.

The WiFi access points are solid for me; I haven’t had any firmware issues yet, but I’m painfully aware they’re out there from my previous MSP life. I can’t say I’ve tried any Ruckus hardware for comparison either. My main critique of the WiFi 6 Ubiquiti devices is that despite being able to use 5GHz at 160MHz channel width, the device only has a single 1Gbps port, even though it’s possible to push more than 1Gbps through the APs. But I can understand there aren’t many environments and scenarios where pushing 1Gbps+ through WiFi is actually required yet.

Userlevel 7
Badge +6

> Another comment to make, be sure that your CPU generation is supported on ESXi 7.0, the older versions of ESXi are all EoL in October.

For a lab environment, I’m not as concerned about being *supported* on my proc (also note that some models of storage adapters and network cards are deprecated under certain versions of ESXi and will not show up, as those builds lack the proper drivers). For instance, ESXi 7 is not *supported* on Dell 11th and 12th gen hardware. HOWEVER, it will run provided your firmware is up to date. Just don’t expect support. In my case, I don’t have support anyway, as this is NFR-licensed gear.

Userlevel 7
Badge +13

Hey @Iams3le, the link you posted contains a redirect from a Veeam competitor; can you clean it up? 😂

Userlevel 7
Badge +20

> For lab environment, I’m not as concerned about being *supported* on my proc… In my case, I don’t have support anyway, as this is NFR licensed gear.

Understandable that it’s not always possible to achieve compatibility. I tend not to care about the CPU itself being certified; for example, my 24x7 server is an Intel Core T-series processor, chosen for its low power consumption at a respectable frequency. But I absolutely want the architecture supported, as otherwise it’s entirely possible that ESXi would go and use instruction sets no longer supported on the processor. It’s a limited and specific scenario, but as someone who has had their fingers burned by Broadcom not supporting RSS and triggering a PSOD on 6.7, I try to maintain compliance with such things.

 

It’s all a game of calculated risk though when you’re home labbing anyway!

Userlevel 7
Badge +7

> The current work lab… also looks nicer than a lot of production environments I’ve seen.

 

That is one sweet setup!

Userlevel 7
Badge +7

Throwing this out there, but would having a cloud based lab be considered a home lab?

Especially with the option to automate deployment of an environment and the ability to quickly re-create it again. I suppose one disadvantage is not having access to the underlying hardware to tinker on.

Userlevel 7
Badge +9

> Would having a cloud based lab be considered a home lab?

Great point, but I wouldn’t consider it a home lab @dips… I might be wrong, but I would classify it as a learning / training environment.

Userlevel 7
Badge +9

> That is one sweet setup!

+1

Userlevel 7
Badge +6

> Would having a cloud based lab be considered a home lab?

I think that it’s almost a requirement. I mean, you see the likes of Rick Vanover and Anthony Spiteri run labs that I think span both on-premises and cloud. For sure, they have to have cloud to spin up labs for cloud-based products, etc. It’s not all the sexiness of running hardware in your own datacenter, but for the purposes of learning and testing, I suppose it’s pretty much a requirement. In fact, the other day I was looking for free/NFR versions of Azure to do some testing of cloud-based services as well as extending to object storage in Blob, etc.
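On the Blob side, once the credits are sorted, standing up a test container to point object-storage jobs at is only a few lines with the azure-storage-blob SDK; a minimal sketch (the connection string and names are placeholders):

```python
# Minimal sketch: create a Blob container for lab object-storage testing.
# Assumes the azure-storage-blob SDK; connection string and names are placeholders.
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=labstorage;AccountKey=<key>;EndpointSuffix=core.windows.net"
service = BlobServiceClient.from_connection_string(conn_str)

container = service.create_container("lab-capacity-tier-test")  # hypothetical container name
print("Created container:", container.container_name)

# Sanity check: upload a small blob and list the container contents
container.upload_blob("hello.txt", b"object storage reachable", overwrite=True)
for item in container.list_blobs():
    print(item.name, item.size)
```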

Userlevel 7
Badge +20

> In fact, the other day I was looking for free/NFR versions of Azure to do some testing of cloud-based services as well as extending to object storage in Blob, etc.

I don’t know how you fared in your search, but if your org has an MSDN subscription, you should be able to get Azure dev credits of around $150 per month (converted to your local currency if outside the US). There are some restrictions on the regions in which you can access certain resources, primarily where capacity is more constrained, but it’s great.

Userlevel 7
Badge +6

> If your org has an MSDN subscription, you should be able to get Azure Dev credits of around $150 per month.

 

Thanks, I’ll check into that. We don’t have MSDN right now, but it wouldn’t hurt us any. I know we get some Azure credits from our partnership, but I don’t know how much, nor how much we’re using them at the moment. But thanks!

 
