Happy Friday everyone!

 

I’m back for another Fun Friday question: What do you use to lab/test/play with technology?

 

Whether you’ve just got a preferred cloud provider, live on the “hands on labs” of a particular vendor, or have a substation in your garden powering a giant datacentre in your basement, I wanna hear it!

 

I’ve currently got a 48-port 1Gbps PoE switch with 6x 10Gbps uplinks and 2x HPE quad-core servers with 32GB RAM each running ESXi, and if I need to overflow I’ve got VMware Workstation on my quad-core 32GB PC, all sat under my desk! (#toasty for the winter)

Just for virtualization in my homelab: an i5-6500 workstation, 24GB of RAM, 3 SSDs for the OS partition and 3 WD Red Plus drives for storage. That’s not too much, but enough to test :sweat_smile:


I am in the process of building up a private lab at the moment.

  • Got one Dell R730 server with a lot of RAM. Disks are still missing… I will try to build a one-node ESXi cluster on this machine, but I will have to check whether test licenses exist for this, too…
  • The network is 1Gb only; I will see if I can get it up to 10Gb or not. For testing it is probably enough.
  • For Veeam I have a PC with 32GB RAM and a 5TB disk.
  • There is a little tape library somewhere under some other boxes… but for this I need an FC HBA…

We will see what Santa brings to me in the future… :sunglasses:


I have the following in my homelab -

  • Four Intel NUC Skull Canyons with i7-6770HQ quad-core processors
  • Two of the NUCs have 64GB of RAM and the other two have 32GB (planning to upgrade them to 64GB)
  • Each NUC has one on-board 1Gb NIC and one USB 1Gb NIC - so two NICs, using the VMware USB fling
  • 16-port TP-Link Gigabit switch (plans to upgrade to Mikrotik CSS326-24G-2S+RM 24 port Gigabit)
  • 8-port TP-Link switch in basement for all house connections to Bell modem (may upgrade this one to Mikrotik also)
  • Bell 1.5Gb fiber internet connection
  • VMware vCenter 7u3, ESXi 7u2 (ESXi stays on 7u2 until the USB fling is fixed for U3)
  • Synology DS920+ NAS - 4 x 8TB, 2 x 256GB SSD cache, 2 x 1Gb LAN (bonded) -- runs iSCSI/NFS for VMware VMs as well as system backups for vCenter and home laptops
  • Synology DX517 expansion unit - 5 disks -- waiting on disks from Synology before connecting to DS920+

Everything runs pretty well, but I have to keep an eye on resources in VMware; if I run too much I run out of resources, which is why I need to upgrade two of the NUCs to 64GB to match the other two.
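
For anyone who wants to script that "keep an eye on resources" step instead of checking the vSphere Client, a minimal pyVmomi sketch along these lines could work (the vCenter hostname and credentials are placeholders, not this lab's real details):

```python
# Minimal pyVmomi sketch: print CPU/RAM usage per ESXi host so an
# over-committed NUC is easy to spot. Hostname and credentials below
# are placeholders for your own lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip cert validation
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    stats = host.summary.quickStats          # live usage numbers
    hw = host.summary.hardware               # installed capacity
    cpu_total_mhz = hw.cpuMhz * hw.numCpuCores
    mem_total_mb = hw.memorySize // (1024 ** 2)
    print(f"{host.name}: CPU {stats.overallCpuUsage}/{cpu_total_mhz} MHz, "
          f"RAM {stats.overallMemoryUsage}/{mem_total_mb} MB")

Disconnect(si)
```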


Wow…. now I am a little bit jealous…. :joy::joy::joy:

I have looked at the NUCs a few times in the past, but I think they are quite expensive.


I was lucky to get these as someone in the Vanguard program was getting rid of them and sold me all four really cheap.  :grin:


Ok… :sunglasses:  Oh my, if I were a Vanguard….


I have two Lenovo TS-150s with ESXi + vCenter running in this environment.

I need to upgrade them with more RAM and SSD, but I like it.

I’m looking for a NAS to provide shared storage between these two servers.

 

P.S.: I want to see some pics of your homelabs.
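
As an aside, once a NAS like that is in place, presenting a single NFS export to both hosts as one shared datastore can also be scripted. Here is a rough pyVmomi sketch; the NAS hostname, export path and datastore name are made up purely for illustration:

```python
# Rough pyVmomi sketch: mount the same NFS export on every host so
# both servers see one shared datastore. All names/paths/credentials
# are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

spec = vim.host.NasVolume.Specification(
    remoteHost="nas.lab.local",    # the future NAS
    remotePath="/volume1/vmware",  # NFS export on the NAS
    localPath="nfs-shared",        # datastore name as seen by ESXi
    accessMode="readWrite",
    type="NFS",
)

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted {spec.remotePath} on {host.name}")

Disconnect(si)
```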


I used to have 2 Supermicro E300-9Ds with 128GB of RAM and 2 x 1TB SSDs in each server, plus a ProCurve gigabit switch and a Synology DS416slim.

I’ve got rid of all the hardware, and right now I’m going virtual, running my home lab on my MacBook Pro and my HPE ZBook.

I’m looking for an alternative, maybe some sort of cloud provider or similar.

I’m “missing the datacenter sound in my lab”, but I’m looking for better performance, flexibility and less electricity consumption.


Thanks everyone for sharing, and @HunterLF you make a great point here. I love working with HPE/Dell equipment, so you get used to out-of-band management etc., but when I upgrade my PC I’m gonna put a decently spec’d Threadripper or Xeon CPU in there with ample RAM, so I don’t have a mini datacentre, and I’ll nest ESXi within it.


My lab runs on a Dell EMC E560 appliance (3 nodes). I have deployed Veeam VDRO, VMware SRM, VMware vSphere Replication, VMware vRealize Operations Manager and Zerto into this environment.


@victorwu It looks pretty cool, but also expensive, isn't it?
@MicoolPaul Thanks for the reference; it's always a good idea to have enough power to run a small lab nested. If you need more of a lab, like a small datacenter, you can always buy some servers and switches, refurbished or new, etc.
It would also be a good idea, in the near future, if someone is interested in buying a new lab or hardware, to form a group and purchase it together for a better price, specs and support.


@HunterLF This system is my company's demo unit, which is very good for testing. It is not that expensive.


Yep, it certainly would be. I know in the VMware community William Lam has sorted out these bulk-buy offerings before.


You're right, the problem is always the same…. international shipping and taxes.

It would be awesome to find a way to purchase something, maybe a huge cloud subscription, so that all of us could share a big cloud lab space!

I know, dreaming is free!! But it would be so cool one day!


Yeah, I don't think there has been something like this before, but it is intriguing for sure.


Because my company runs test and demo equipment in internal datacenters, I have a lot of hardware to test with :grin: These include:

  • HPE hosts (Gen8-10)
  • HPE 3PAR 7k/8k / Primera
  • sometimes HPE Nimble
  • FC switches

Within these environments I prefer to run nested vSphere configurations to get more test options.
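
For anyone building similar nested vSphere labs, the key VM-level setting is exposing hardware-assisted virtualization to the guest. A rough pyVmomi sketch of flipping that flag on an existing (powered-off) VM might look like this; the VM name and vCenter details are hypothetical:

```python
# Rough pyVmomi sketch: expose hardware virtualization (VT-x/AMD-V) to
# a guest so it can run ESXi nested. VM name and vCenter details are
# placeholders, and the VM should be powered off for the guestId change.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM that will become a nested ESXi host
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nested-esxi-01")

# Reconfigure: nested hardware virtualization plus the ESXi guest OS type
spec = vim.vm.ConfigSpec(nestedHVEnabled=True, guestId="vmkernel7Guest")
task = vm.ReconfigVM_Task(spec)
print(f"Reconfigure task for {vm.name}: {task.info.state}")

Disconnect(si)
```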


That’s one fancy lab there!

 

I'm helping get Veeam set up in the lab at my new place and should have access to a lot of similar goodies! 😁


At the office, I have six PowerEdge R610s running ESXi 6.7 (well, five, one died) connected to an EqualLogic PS6000 array for the compute environment. Three more R610s are on the shelf to be added, as well as a PS6210E. I'm using two Dell PowerConnect 6448 switches in a stack with a Cisco ASA 5505. I have VBR and VDRO (in process) set up on Server 2016 on an HP Z420 running ESXi with a hardware RAID controller and a couple of 4TB and 6TB drives in it for backups. I'll likely change over to a Synology RackStation later on for the Veeam repositories. I'm working on getting hold of a couple of PowerEdge R620s as well as some R710s or R720s.

At home, I have a PowerEdge R610 (to be replaced by an R520) running ESXi 6.7 as well. I have an Extreme Networks Summit 48-port PoE switch and a Ubiquiti UniFi Security Gateway Pro for a firewall. VBR 11 and VBO 5 are running on two Server 2016 VMs and use a Synology DS218+ for the repositories.
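
As a rough aid for sizing repositories like these, a quick back-of-the-envelope calculation helps decide whether a couple of 4TB/6TB drives or a RackStation volume will be enough. All figures below are assumed example values, not this environment's real numbers:

```python
# Back-of-the-envelope repository sizing for a forward-incremental
# backup chain. Every input here is an assumed example value.
source_tb = 4.0          # protected data, TB
data_reduction = 0.5     # assume ~50% saving from compression/dedupe
daily_change = 0.05      # assume 5% of source data changes per day
restore_points = 14      # retention: 1 full + 13 incrementals

full_tb = source_tb * data_reduction
incrementals_tb = source_tb * daily_change * data_reduction * (restore_points - 1)
total_tb = full_tb + incrementals_tb

print(f"Full backup:        {full_tb:.2f} TB")
print(f"Incrementals:       {incrementals_tb:.2f} TB")
print(f"Repository needed: ~{total_tb:.2f} TB (plus headroom for growth)")
```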


The NUCs are terribly expensive! But there are great options now.


Today I use work labs, my own home lab, and a couple of cloud platforms.


I need to look more into the cloud-lab items, specifically Azure.  I’ve never touched AWS, but need to get more comfortable with Azure before I begin adding more to the list...


Derek - both AWS and Azure allow you to ‘test’ their solutions for a little while for free. Azure used to be 30 days I think... and IMO that is not enough time. AWS I believe is 3 months? Just FYI. I used both of those options for playing around to get my Azure & AWS Architect certs :)
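
If it helps anyone getting started with an Azure trial, a first sanity check once the subscription is active can be as simple as the following Python sketch using the Azure SDK (the subscription ID and resource group name are placeholders):

```python
# Minimal sketch of poking around a trial Azure subscription with the
# Python SDK (pip install azure-identity azure-mgmt-resource).
# Subscription ID and resource group name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()  # picks up az CLI / environment login
client = ResourceManagementClient(credential, "<your-subscription-id>")

# List whatever resource groups the subscription already has
for rg in client.resource_groups.list():
    print(rg.name, rg.location)

# Create a throwaway resource group to experiment in
client.resource_groups.create_or_update("lab-playground", {"location": "eastus"})
```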


KodeKloud has playgrounds for Azure and AWS. I used them once when I needed to check something. It is a subscription for the whole learning site, which is mainly focused on DevOps, so it might be overkill unless you are planning to do more of the courses.

