- Intel VT-x or AMD-V is required for running "Nested Virtualization", which supports nested 32-bit VMs.
- Intel EPT or AMD RVI is required for running nested 64-bit VMs.
A quick way to verify whether your CPU truly supports both Intel VT-x + EPT or AMD-V + RVI is to paste the following into a browser:
http://[your-esxi-host-ip-address]/mob/?moid=ha-host&doPath=capability
You will need to log in with your root credentials and then look for the "nestedHVSupported" property. If it states false, you may still be able to install nested ESXi or other hypervisors, but you will not be able to run nested 64-bit VMs, only 32-bit VMs, assuming you have either Intel VT-x or AMD-V support on your CPUs.
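If you prefer the command line over a browser, the same check can be scripted. The sketch below is an illustration, not an official tool: in real use you would first fetch the MOB capability page (for example with `curl -sk -u root "https://<esxi-host>/mob/?moid=ha-host&doPath=capability" -o capability.html`); here a saved sample response is parsed so the snippet is self-contained.

```shell
# Create a sample of the MOB capability page (illustrative; a real
# response is a full HTML table of HostCapability properties)
cat > capability.html <<'EOF'
<tr><td>nestedHVSupported</td><td>boolean</td><td>true</td></tr>
EOF

# Look for the nestedHVSupported property in the saved page
if grep -Eq 'nestedHVSupported.*true' capability.html; then
  result="nested 64-bit VMs supported"
else
  result="32-bit nested VMs only"
fi
echo "$result"
```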
For more details, take a look at this troubleshooting article: Having Difficulties Enabling Nested ESXi in vSphere 5.1?
Disclaimer: This is not officially supported by VMware, please use at your own risk.
There are some changes to Nested Virtualization in vSphere 5.1, also formally known as VHV (Virtual Hardware-Assisted Virtualization). If you are using vSphere 5.0 to run nested ESXi or other nested hypervisors, then please take a look at the instructions in this article. With vSphere 5.1, there have been a few minor changes to enable VHV.
- The new Virtual Hardware 9 compatibility is required when creating your nested ESXi VM; Virtual Hardware 8 will not work if you are running ESXi 5.1 on your physical host. You will still need to enable promiscuous mode on the portgroup that will be used for your nested ESXi VM for network connectivity.
- vhv.allow = "true" is no longer valid in ESXi 5.1 for enabling VHV. A new parameter, vhv.enable = "true", has been introduced that is defined on a per-VM basis to provide finer granularity of VHV support. This also allows better portability between VMware's hosted products such as VMware Fusion and Workstation, as they also support the vhv.enable parameter.
- You can now enable VHV on a per-VM basis using the new vSphere Web Client, which simply adds the vhv.enable = "true" parameter to the VM's .vmx configuration file.
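For reference, the per-VM setting described above ends up as a single line in the VM's configuration file. This is a sketch of the relevant excerpt only, not a complete .vmx:

```
# Excerpt from a nested ESXi VM's .vmx file on ESXi 5.1
# (replaces the old host-wide vhv.allow setting from vSphere 5.0)
vhv.enable = "TRUE"
```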
Note: You can run a nested ESXi 5.1 VM on top of a physical ESXi 5.0 host, just follow the instructions here.
Enabling VHV (Virtual Hardware-Assisted Virtualization)
Step 1 – Create a new Virtual Hardware 9 Virtual Machine using the new vSphere Web Client that's available with vCenter Server 5.1.
Step 2 – Select "Linux" as the guestOS Family and "Other Linux (64-bit)" as the guestOS Version.
Step 3 – During the customize hardware wizard, expand the "CPU" section and check the "Hardware Virtualization" box to enable VHV.
Note: If this box is grayed out, it means either that your physical CPU does not support Intel VT-x + EPT or AMD-V + RVI, which is required to run VHV, OR that you are not using Virtual Hardware 9. If your CPU only supports Intel VT-x or AMD-V, then you can still install nested ESXi, but you will only be able to run nested 32-bit VMs and not nested 64-bit VMs.
Step 4 – It is still recommended that you change the guestOS Version to VMware ESXi 5.x after you have created the VM shell, as some special settings are applied automatically. Unfortunately, with the new vSphere Web Client you will not be able to modify the guestOS after creation, so you will need to use the C# Client OR manually go into the .vmx and update guestOS = "vmkernel5"
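If you take the manual route in Step 4, the change amounts to rewriting one line of the .vmx. A minimal sketch, assuming the VM is powered off and you are working on a copy of the file (the file name here is hypothetical):

```shell
# Create a sample .vmx line as the Web Client would have written it
# when "Other Linux (64-bit)" was selected
cat > nested-esxi.vmx <<'EOF'
guestOS = "otherlinux-64"
EOF

# Rewrite the guestOS to the ESXi 5.x guest type
sed -i 's/^guestOS = .*/guestOS = "vmkernel5"/' nested-esxi.vmx

# Verify the change
grep '^guestOS' nested-esxi.vmx
```

Reload or re-register the VM afterwards so the edited .vmx takes effect.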
Now you are ready to install nested ESXi VMs as well as run nested 64-bit VMs within them.
If you have followed my previous article about How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5, you may recall a diagram about the levels of "Inception" that can be performed with nested ESXi. That is, the number of times you could nest ESXi and still have it be in a "functional" state. With vSphere 5.0, the limit that I was able to push was 2 levels of nested ESXi. With the latest release of vSphere 5.1, I have been able to push that limit to an extraordinary 3 levels of inception!
You might ask why anyone would want to do this … well, I don't have a good answer other than … because I can? 😉 VHV is one of the coolest "unsupported" features in my book, and I'm glad it works beyond what it was designed for.
For proper network connectivity, also ensure that either your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally on the portgroup or on the distributed portgroup your nested ESXi hosts are connected to.
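On a standard vSwitch, the security policy can also be set from the ESXi host shell with esxcli. The commands below are a sketch for a vSwitch named vSwitch0 (an assumption, substitute your own), and note that portgroup-level policies override the vSwitch-wide setting:

```
# Run on the physical ESXi 5.1 host; vSwitch0 is a placeholder name
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true \
    --allow-forged-transmits=true

# Confirm the resulting policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```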
Nesting “Other” Hypervisors
For those of you who feel inclined to run other hypervisors such as Hyper-V, you can do so with the latest release of ESXi 5.1. The process is very straightforward, just like running a nested ESXi host.
Step 1 – Create a Virtual Hardware 9 VM and select the appropriate guestOS. In this example, I selected Windows Server 2012 (64-bit) as the guestOS Version.
Step 2 – Enable VHV under the CPU section if you wish to create and run nested 64-bit VMs under Hyper-V.
Step 3 – You will need to add one extra .vmx parameter which tells the underlying guestOS (Hyper-V) that it is not running as a virtual guest, which in fact it really is. The parameter is hypervisor.cpuid.v0 = FALSE
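Put together, the extra entries in the Hyper-V VM's .vmx configuration file look roughly like this (a sketch of the relevant lines only; the vhv.enable line is what Step 2 adds when set through the Web Client):

```
# Extra lines in the Hyper-V VM's .vmx file
vhv.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"
```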