Cheap home lab build: virtualization server, GNS3 for CCNA-CCNP rack, and ZFS
Here I build a virtualization server that can be rack-mounted inside an IKEA Lack table converted into a rack, hosting a Cisco CCNA-CCNP home lab (with GNS3 for Cisco router emulation) and a virtualization lab for studying. This video is a shorter, updated and compressed version of a video I uploaded half a year ago about building a cheap virtualization host for home use, i.e. one that does not involve expensive industry-grade server components.

I discuss the Intel VT-x / AMD-V virtualization extensions and the VT-d / AMD-Vi (IOMMU) feature for direct PCI-e device passthrough to VMs, and I also point out the features you will miss if you do not invest in a server-grade motherboard. In particular, SR-IOV (single-root I/O virtualization) is only supported on newer server-grade boards, which are still very expensive.

The HP NC364T quad gigabit network cards are a good choice to start your labs. These cards do not support VMDq, which is becoming more common nowadays, but they work fine for GNS3 as breakout cards to real Cisco 3750 switches. Another quirk of these quad cards is that they have a PCI-e switch chip on the board, so you have to pass through both NICs behind the same bridge together, and you need to enable "unsafe" passthrough support in your hypervisor.

For storage, I have had very good success with ZoL (ZFS on Linux): the mechanical disks are in RAIDZ, and an SSD is partitioned in two, with the first partition used for the ZIL (ZFS Intent Log) and the larger partition for L2ARC (Level 2 Adaptive Replacement Cache) as a read cache, which really helps when your server has little RAM (mine so far only has 16GB). ZoL and ZFS turned out to be so good that I would no longer spend money on a RAID controller card. Furthermore, the motherboard I use has plenty of native SATA ports, so I also saved the cost of an HBA (Host Bus Adapter) for ZFS. Running ZFS without ECC RAM is totally fine for a home lab, since the data has no real value. The hands-on experience with a home lab is PRICELESS!!!
I have successfully tested this rig with the Linux KVM (Kernel-based Virtual Machine) hypervisor and also with ESXi 5.5. With ESXi 5.5 the only issue I ran into was that the driver for the onboard Realtek NIC had to be baked into the installation ISO.
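Before passing PCI-e cards through to VMs on the KVM setup, it is worth checking that the CPU extensions and the IOMMU are actually active, and how the devices are grouped (devices behind the same bridge, like the two NICs on the quad card, end up in one group and must be passed through together). The following is only a minimal sketch for a Linux host, not part of the build itself; it reads standard /proc and sysfs locations:

```python
#!/usr/bin/env python3
# Host sanity check before attempting PCI-e passthrough on a Linux/KVM host.
# Illustrative sketch only - paths are standard Linux locations.
import os

def cpu_flags():
    """Return the CPU feature flag set from /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()

# svm = AMD-V, vmx = Intel VT-x
if "svm" in flags or "vmx" in flags:
    print("Hardware virtualization (AMD-V / VT-x): present")
else:
    print("Hardware virtualization: NOT found - check the BIOS settings")

# When the kernel has brought up an IOMMU (AMD-Vi / Intel VT-d),
# the IOMMU groups show up under /sys/kernel/iommu_groups.
groups_dir = "/sys/kernel/iommu_groups"
if not os.path.isdir(groups_dir) or not os.listdir(groups_dir):
    print("IOMMU not active - enable it in the BIOS "
          "(on Intel boards also add intel_iommu=on to the kernel command line)")
else:
    # Devices that share a group (e.g. both NICs behind the quad card's
    # PCI-e bridge) can only be passed through to a VM together.
    for group in sorted(os.listdir(groups_dir), key=int):
        devices = os.listdir(os.path.join(groups_dir, group, "devices"))
        print(f"IOMMU group {group}: {', '.join(sorted(devices))}")
```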
COMPONENTS I HAVE USED:
==========================
CPU: AMD FX-8350 (has AMD-V, IOMMU and ECC support, and 8 corelets)
CPU cooler: Alpenföhn Brocken with 12cm fan (very quiet, fits into the small case)
MoBo: Gigabyte 990FXA-UD5 Rev. 3 (IOMMU support and AMD-V work fine)
MoBo also has plenty of PCI-e lanes routed to the PCI-e slots, so VM passthrough works
MoBo does not support SR-IOV, so you cannot share one PCI-e card among multiple VMs
RAM: 2* 8GB of Crucial Ballistix Sport DDR3 (no ECC; planning to extend it to 32GB)
Video: old 32-bit PCI VGA (Gigabyte MoBo does not boot without video!!!)
Network: 3* Quad Gigabit HP NC364T + onboard Realtek GbE LAN
Storage: 32GB USB stick for hypervisor + 4 mechanical disks in RAIDZ-1
Storage read/write cache: 128GB SSD partitioned for ZIL and L2ARC (see the ZFS sketch after this list)
Case: cheap ATX chassis, bought used, converted to be mountable in an IKEA "Lack" table rack
PSU: Corsair VS550
IPMI substitute: native serial console
InfiniBand: still hunting for cheap Mellanox full-height QDR cards with VPI and cables
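To make the ZFS layout above concrete, here is a small sketch that assembles the corresponding zpool create command. The pool name and device paths are placeholders, not my actual disks; always use stable /dev/disk/by-id paths on a real pool:

```python
#!/usr/bin/env python3
# Sketch of the ZFS pool described above: 4 mechanical disks in RAIDZ-1,
# plus one SSD split into a small ZIL/SLOG partition and a larger L2ARC partition.
# Device names are placeholders - substitute your own /dev/disk/by-id paths.
import subprocess  # only needed if you uncomment the run() call below

POOL = "tank"                                                  # hypothetical pool name
DATA_DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # 4 mechanical disks
SLOG_PART = "/dev/sde1"                                        # small SSD partition -> ZIL/SLOG
L2ARC_PART = "/dev/sde2"                                       # larger SSD partition -> L2ARC

cmd = (
    ["zpool", "create", "-o", "ashift=12", POOL]  # ashift=12 assumes 4K-sector disks
    + ["raidz1"] + DATA_DISKS                     # single-parity RAIDZ data vdev
    + ["log", SLOG_PART]                          # separate intent log device (SLOG)
    + ["cache", L2ARC_PART]                       # L2ARC read cache device
)

print("Would run:", " ".join(cmd))
# Uncomment to actually create the pool (destroys data on the listed devices!):
# subprocess.run(cmd, check=True)
```

With the SSD partitions attached as log and cache vdevs, synchronous writes land on the fast SLOG partition and data evicted from ARC spills over into L2ARC, which is what helps compensate for the 16GB of RAM in this box.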