Installing ESXi on a Whitebox (“Unsupported Hardware”)
VMware partners with Dell, HP, and other major enterprise vendors to sell VMware Certified hardware that is guaranteed to work with ESXi. However, we don’t have the money for an overpriced server; we can build our own and use a few hacks to get the installer onto unsupported hardware. In my case, Intel’s DQ67SW motherboard is, for the most part, supported.
When installing ESXi I ran into several problems that took hours to figure out. ESXi is the free edition of VMware’s hypervisor, and even with a paid license VMware does not support “unsupported hardware” such as this whitebox. You are therefore on your own, digging through the roots of the internet for answers and possible workarounds, burning CD-R after CD-R with custom images and what not.
First and most importantly, ESXi did not like my USB CD/DVD-R drive, an Acer portable drive that I always carry. The motherboard BIOS would see it, it would boot the installer and load the files, but after I selected the hard drive to install to, the install would fail with an “Unable to find image” error.
Initially I thought the problem was in the .iso image I had customized to detect the Intel 82579LM Gigabit network card. I tried several install methods, even resorting to:
- Bootable USB ESXi installer (4 GB thumb drive)
- USB DVD/CD
- IP-KVM Virtual Media (mounts as a USB virtual device)
All of these install methods, each mounting the .iso, failed miserably, and I spent a considerable amount of time convinced the mistake was in my customization of the image. I was wrong. In the darkest hour, when I was about to call it a day, I thought of using my internal ATAPI IDE/SATA DVD-ROM drive. I pulled out my very first ESXi image with a custom oem.tgz; it booted, installed without a hitch, and the network drivers were detected. Success!
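For anyone repeating this, the oem.tgz customization is roughly the following. This is a minimal sketch based on common community guides for ESXi 4.x, not my exact image: the directory layout, the simple.map line format, and the PCI ID (8086:1502 for the 82579LM) are assumptions you should verify against your ESXi build, and you must supply a driver module (e.g. e1000e.o) compiled for that build.

```shell
# Hypothetical sketch: build a custom oem.tgz so the ESXi installer can
# drive the Intel 82579LM NIC. Paths and map format are assumptions.
mkdir -p oem/etc/vmware oem/usr/lib/vmware/vmkmod

# Copy a driver module compiled for your ESXi version into place, e.g.:
#   cp e1000e.o oem/usr/lib/vmware/vmkmod/

# Map the 82579LM PCI ID to the e1000e driver.
echo "8086:1502 0000:0000 network e1000e" > oem/etc/vmware/simple.map

# Pack it up; this oem.tgz then replaces the stock one inside the .iso.
tar -czf oem.tgz -C oem etc usr
```

The resulting oem.tgz is dropped into the install image in place of the stock (empty) one before remastering the .iso.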
If you are using the DQ67SW B3 motherboard, I recommend upgrading to the latest BIOS so that ESXi detects the hardware properly and VMDirectPath works. Upgrading the BIOS is painless if you have done it before.
VMDirectPath — Obtaining Real World Performance in ZFS
My previous ZFS implementation was basically a quick hack: Ubuntu with ZFS bolted on. I got the benefits of ZFS at a hefty cost in performance. For example, with four hard drives in a RAID-Z array, a file copy from the Ubuntu host to the ZFS filesystem maxed out at around 30 MB/s, slower than a single-platter hard drive, which typically benchmarks at around 90 MB/s.
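A crude sequential-write test of the kind behind those numbers can be run with dd. The TARGET path here is a stand-in (it defaults to /tmp so the snippet runs anywhere); point it at the ZFS mountpoint you actually want to measure. The conv=fdatasync flag makes dd flush to disk before printing its throughput line, so cached writes don’t inflate the result.

```shell
# Rough sequential-write benchmark; dd prints MB/s on its final line.
# TARGET is a placeholder path, not my actual pool mountpoint.
TARGET=${TARGET:-${TMPDIR:-/tmp}}
dd if=/dev/zero of="$TARGET/zfs-bench.bin" bs=1M count=256 conv=fdatasync
rm -f "$TARGET/zfs-bench.bin"
```

A single dd pass is only a ballpark figure; real workloads (small files, mixed read/write) will behave differently, but it is enough to show the 30 MB/s vs 90 MB/s gap.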
ESXi and VMDirectPath provided the solution: I could pass my LSI controller through to a guest machine running Solaris or the like and resolve the performance issues. Both the motherboard and the CPU need to support VT-d, and my previous motherboard did not. The added bonus of spending the extra money to upgrade is that I will save on electricity thanks to the power efficiency of Sandy Bridge, as I previously described.
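Before buying or flashing anything, it is worth checking what the CPU advertises. This is a quick sanity check from a Linux live CD, not an ESXi command: /proc/cpuinfo can only show the VT-x (vmx) flag, while VT-d additionally depends on chipset and BIOS support, which you confirm in the BIOS setup (and, on Linux, by DMAR lines in dmesg).

```shell
# Check for the VT-x flag on a Linux system; VT-d support is separate
# and must also be enabled in the BIOS for VMDirectPath to work.
if grep -qw vmx /proc/cpuinfo; then
  echo "VT-x flag present"
else
  echo "VT-x flag not found"
fi
```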
Will post my results soon. More to come.