Hardware-assisted virtualization on x86/x86-64 CPUs
x86 virtualization is the use of hardware-assisted virtualization capabilities on an x86/x86-64 CPU. In the late 1990s x86 virtualization was achieved by complex software techniques, necessary to compensate for the processor's lack of hardware-assisted virtualization capabilities while attaining reasonable performance. In 2005 and 2006, both Intel (VT-x) and AMD (AMD-V) introduced limited hardware virtualization support that allowed simpler virtualization software but offered very few speed benefits.[1] Greater hardware support, which allowed substantial speed improvements, came with later processor models.

Software-based virtualization.

The following discussion focuses only on virtualization of the x86 architecture's protected mode.

In protected mode the operating system kernel runs at a higher privilege such as ring 0, and applications at a lower privilege such as ring 3.[citation needed] In software-based virtualization, a host OS has direct access to hardware while the guest OSes have limited access to hardware, just like any other application of the host OS. One approach used in x86 software-based virtualization to overcome this limitation is called ring deprivileging, which involves running the guest OS at a ring higher (lesser privileged) than 0.[2] Three techniques made virtualization of protected mode possible:
These techniques incur some performance overhead due to lack of MMU virtualization support, as compared to a VM running on a natively virtualizable architecture such as the IBM System/370.[4]: 10 [9]: 17 and 21

On traditional mainframes, the classic type 1 hypervisor was self-standing and did not depend on any operating system or run any user applications itself. In contrast, the first x86 virtualization products were aimed at workstation computers, and ran a guest OS inside a host OS by embedding the hypervisor in a kernel module that ran under the host OS (type 2 hypervisor).[8]

There has been some controversy whether the x86 architecture with no hardware assistance is virtualizable as described by Popek and Goldberg. VMware researchers pointed out in a 2006 ASPLOS paper that the above techniques made the x86 platform virtualizable in the sense of meeting the three criteria of Popek and Goldberg, albeit not by the classic trap-and-emulate technique.[4]: 2–3

A different route was taken by other systems like Denali, L4, and Xen, known as paravirtualization, which involves porting operating systems to run on the resulting virtual machine, which does not implement the parts of the actual x86 instruction set that are hard to virtualize. The paravirtualized I/O has significant performance benefits, as demonstrated in the original SOSP '03 Xen paper.[10]

The initial version of x86-64 (AMD64) did not allow for software-only full virtualization due to the lack of segmentation support in long mode, which made the protection of the hypervisor's memory impossible, in particular the protection of the trap handler that runs in the guest kernel address space.[11][12]: 11 and 20 Revision D and later 64-bit AMD processors (as a rule of thumb, those manufactured in 90 nm or less) added basic support for segmentation in long mode, making it possible to run 64-bit guests in 64-bit hosts via binary translation.
Intel did not add segmentation support to its x86-64 implementation (Intel 64), making 64-bit software-only virtualization impossible on Intel CPUs, but Intel VT-x support makes 64-bit hardware-assisted virtualization possible on the Intel platform.[13][14]: 4 On some platforms, it is possible to run a 64-bit guest on a 32-bit host OS if the underlying processor is 64-bit and supports the necessary virtualization extensions.

Hardware-assisted virtualization.

In 2005 and 2006, Intel and AMD (working independently) created new processor extensions to the x86 architecture. The first generation of x86 hardware virtualization addressed the issue of privileged instructions. The issue of low performance of virtualized system memory was addressed with MMU virtualization that was added to the chipset later.

Central processing unit.

Virtual 8086 mode.

Based on painful experiences with the 80286 protected mode, which by itself was not suitable enough to run concurrent DOS applications well, Intel introduced the virtual 8086 mode in their 80386 chip, which offered virtualized 8086 processors on the 386 and later chips. Hardware support for virtualizing the protected mode itself, however, became available 20 years later.[15]

AMD virtualization (AMD-V).

AMD developed its first-generation virtualization extensions under the code name "Pacifica", and initially published them as AMD Secure Virtual Machine (SVM),[16] but later marketed them under the trademark AMD Virtualization, abbreviated AMD-V. On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor") and the Athlon 64 FX ("Windsor") as the first AMD processors to support this technology.

AMD-V capability also features on the Athlon 64 and Athlon 64 X2 family of processors with revisions "F" or "G" on socket AM2, Turion 64 X2, and Opteron second-generation[17] and third-generation,[18] Phenom and Phenom II processors. The APU Fusion processors support AMD-V. AMD-V is not supported by any Socket 939 processors. The only Sempron processors which support it are APUs and Huron, Regor, Sargas desktop CPUs.

AMD Opteron CPUs beginning with the Family 0x10 Barcelona line, and Phenom II CPUs, support a second-generation hardware virtualization technology called Rapid Virtualization Indexing (formerly known as Nested Page Tables during its development), later adopted by Intel as Extended Page Tables (EPT). As of 2019, all Zen-based AMD processors support AMD-V.

The CPU flag for AMD-V is "svm". This may be checked in BSD derivatives via dmesg or sysctl and in Linux via /proc/cpuinfo.[19] Instructions in AMD-V include VMRUN, VMLOAD, VMSAVE, CLGI, VMMCALL, INVLPGA, SKINIT, and STGI. With some motherboards, users must enable the AMD SVM feature in the BIOS setup before applications can make use of it.[20]

Intel virtualization (VT-x).

"Intel VT-x" redirects here. For the Itanium virtualization extensions, see Intel VT-i.

Previously codenamed "Vanderpool", VT-x represents Intel's technology for virtualization on the x86 platform. On November 13, 2005, Intel released two models of Pentium 4 (Model 662 and 672) as the first Intel processors to support VT-x. The CPU flag for VT-x capability is "vmx"; in Linux, this can be checked via /proc/cpuinfo, or in macOS via sysctl machdep.cpu.features.[19]
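As a minimal sketch of the flag check described above, the following Python function scans /proc/cpuinfo text for the "vmx" (Intel VT-x) and "svm" (AMD-V) flags. The function name and the trimmed sample input are illustrative, not part of any standard API.

```python
def virt_flags(cpuinfo_text):
    """Return which hardware-virtualization flags appear in /proc/cpuinfo text.

    "vmx" indicates Intel VT-x support; "svm" indicates AMD-V support.
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        # Each logical CPU has a "flags : ..." line listing its feature flags.
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {f for f in ("vmx", "svm") if f in flags}

# Illustrative, heavily trimmed /proc/cpuinfo excerpt from an AMD system:
sample = "processor : 0\nflags : fpu vme de pse msr svm lahf_lm\n"
print(virt_flags(sample))  # -> {'svm'}
```

On a real Linux machine the text would come from `open("/proc/cpuinfo").read()`; an empty result means either the CPU lacks the extensions or they are disabled in firmware.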

"VMX" stands for Virtual Machine Extensions, which adds 13 new instructions: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF, VMXON, INVEPT, INVVPID, and VMFUNC.[21] These instructions permit entering and exiting a virtual execution mode where the guest OS perceives itself as running with full privilege (ring 0), but the host OS remains protected. As of 2015, almost all newer server, desktop and mobile Intel processors support VT-x, with some of the Intel Atom processors as the primary exception.[22] With some motherboards, users must enable Intel's VT-x feature in the BIOS setup before applications can make use of it.[23]

Intel started to include Extended Page Tables (EPT),[24] a technology for page-table virtualization,[25] since the Nehalem architecture,[26][27] released in 2008. In 2010, Westmere added support for launching the logical processor directly in real mode, a feature called "unrestricted guest", which requires EPT to work.[28][29]

Since the Haswell microarchitecture (announced in 2013), Intel started to include VMCS shadowing as a technology that accelerates nested virtualization of VMMs.[30] The virtual machine control structure (VMCS) is a data structure in memory that exists exactly once per VM, and is managed by the VMM. With every change of the execution context between different VMs, the VMCS is restored for the current VM, defining the state of the VM's virtual processor.[31] As soon as more than one VMM or nested VMMs are used, a problem appears in a way similar to what required shadow page table management to be invented, as described above. In such cases, the VMCS needs to be shadowed multiple times (in the case of nesting) and partially implemented in software if there is no hardware support by the processor.
To make shadow VMCS handling more efficient, Intel implemented hardware support for VMCS shadowing. [ 32 ]
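The one-VMCS-per-VM bookkeeping described above can be illustrated with a toy software model. This is purely an illustration of the data-structure relationship (one VMCS per VM, restored on each context switch), not real hardware behavior; all class and field names are invented for the sketch.

```python
class VMCS:
    """Toy stand-in for a VMCS: holds the saved state of one VM's virtual CPU."""
    def __init__(self, vm_name):
        self.vm_name = vm_name
        self.guest_rip = 0   # simplified: guest instruction pointer
        self.guest_rsp = 0   # simplified: guest stack pointer

class ToyVMM:
    """Toy VMM that keeps exactly one VMCS per VM, as the text describes."""
    def __init__(self):
        self.vmcs_per_vm = {}
        self.current = None

    def launch(self, vm_name):
        self.vmcs_per_vm[vm_name] = VMCS(vm_name)

    def switch_to(self, vm_name):
        # On a context switch, the target VM's VMCS is restored; its
        # contents define the state of that VM's virtual processor.
        self.current = self.vmcs_per_vm[vm_name]
        return self.current

vmm = ToyVMM()
vmm.launch("guest-a")
vmm.launch("guest-b")
vmm.switch_to("guest-a").guest_rip = 0x1000
active = vmm.switch_to("guest-b")
print(active.vm_name)  # -> guest-b; guest-a's state stays in its own VMCS
```

Nested virtualization complicates exactly this picture: a guest VMM maintains its own VMCS structures, which the host must shadow, and VMCS shadowing moves part of that work into hardware.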

VIA virtualization (VIA VT).

VIA Nano 3000 Series processors and higher support the VIA VT virtualization technology, compatible with Intel VT-x.[33] EPT is present in Zhaoxin ZX-C, a descendant of VIA QuadCore-E & Eden X4, similar to Nano C4350AL.[34]

Interrupt virtualization (AMD AVIC and Intel APICv).

In 2012, AMD announced their Advanced Virtual Interrupt Controller (AVIC), targeting interrupt overhead reduction in virtualization environments.[35] This technology, as announced, does not support x2APIC.[36] In 2016, AVIC became available on the AMD family 15h models 6Xh (Carrizo) processors and newer.[37] Also in 2012, Intel announced a similar technology for interrupt and APIC virtualization, which did not have a brand name at its announcement time.[38] Later, it was branded as APIC virtualization (APICv)[39] and it became commercially available in the Ivy Bridge EP series of Intel CPUs, which is sold as Xeon E5-26xx v2 (launched in late 2013) and as Xeon E5-46xx v2 (launched in early 2014).[40]

Graphics processing unit.

Graphics virtualization is not part of the x86 architecture. Intel Graphics Virtualization Technology (GVT) provides graphics virtualization as part of more recent Gen graphics architectures. Although AMD APUs implement the x86-64 instruction set, they implement AMD's own graphics architectures (TeraScale, GCN and RDNA) which do not support graphics virtualization. Larrabee was the only graphics microarchitecture based on x86, but it likely did not include support for graphics virtualization.


Memory and I/O virtualization is performed by the chipset.[41] Typically these features must be enabled by the BIOS, which must be able to support them and also be set to use them.

I/O MMU virtualization (AMD-Vi and Intel VT-d).

An input/output memory management unit (IOMMU) allows guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI passthrough.[42] An IOMMU also allows operating systems to eliminate bounce buffers needed to let them communicate with peripheral devices whose memory address spaces are smaller than the operating system's memory address space, by using memory address translation. At the same time, an IOMMU also allows operating systems and hypervisors to prevent buggy or malicious hardware from compromising memory security. Both AMD and Intel have released their IOMMU specifications:

  • AMD’s I/O Virtualization Technology, “AMD-Vi”, originally called “IOMMU”[43]
  • Intel’s “Virtualization Technology for Directed I/O” (VT-d),[44] included in most high-end (but not all) newer Intel processors since the Core 2 architecture.[45]

In addition to the CPU support, both the motherboard chipset and the system firmware (BIOS or UEFI) need to fully support the IOMMU I/O virtualization functionality for it to be usable. Only PCI or PCI Express devices supporting function level reset (FLR) can be virtualized this way, as it is required for reassigning various device functions between virtual machines.[46][47] If a device to be assigned does not support Message Signaled Interrupts (MSI), it must not share interrupt lines with other devices for the assignment to be possible.[48] All conventional PCI devices routed behind a PCI/PCI-X-to-PCI Express bridge can be assigned to a guest virtual machine only all at once; PCI Express devices have no such restriction.
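On Linux, one quick way to see whether the IOMMU (VT-d or AMD-Vi) is active end to end is to look at /sys/kernel/iommu_groups, which the kernel populates only when an IOMMU is enabled. The following sketch, assuming that standard sysfs layout, maps each group to its PCI devices; an empty result usually means the IOMMU is disabled in firmware or not enabled on the kernel command line.

```python
from pathlib import Path

def list_iommu_groups(base="/sys/kernel/iommu_groups"):
    """Map IOMMU group name -> sorted list of PCI addresses in that group.

    Returns an empty dict when the directory is absent (no active IOMMU).
    """
    groups = {}
    root = Path(base)
    if not root.is_dir():
        return groups
    for group in sorted(root.iterdir()):
        devices = group / "devices"   # each entry is a PCI device address
        if devices.is_dir():
            groups[group.name] = sorted(p.name for p in devices.iterdir())
    return groups
```

Devices in the same group cannot be split between virtual machines, so for passthrough one typically wants the target device alone in its group.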

Network virtualization (VT-c).

  • Intel’s “Virtualization Technology for Connectivity” (VT-c).[49]

PCI-SIG Single Root I/O Virtualization (SR-IOV).

PCI-SIG Single Root I/O Virtualization (SR-IOV) provides a set of general (non-x86-specific) I/O virtualization methods based on PCI Express (PCIe) native hardware, as standardized by PCI-SIG:[50]

  • Address translation services (ATS) supports native IOV across PCI Express via address translation. It requires support for new transactions to configure such translations.
  • Single-root IOV (SR-IOV or SRIOV) supports native IOV in existing single-root complex PCI Express topologies. It requires support for new device capabilities to configure multiple virtualized configuration spaces.[51]
  • Multi-root IOV (MR-IOV) supports native IOV in new topologies (for example, blade servers) by building on SR-IOV to provide multiple root complexes which share a common PCI Express hierarchy.

In SR-IOV, the most common of these, a host VMM configures supported devices to create and allocate virtual "shadows" of their configuration spaces so that virtual machine guests can directly configure and access such "shadow" device resources.[52] With SR-IOV enabled, virtualized network interfaces are directly accessible to the guests,[53] avoiding involvement of the VMM and resulting in high overall performance;[51] for example, SR-IOV achieves over 95% of the bare-metal network bandwidth in NASA's virtualized datacenter[54] and in the Amazon Public Cloud.[55][56]
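On Linux, the virtual functions described above are typically created through the standard sysfs attributes `sriov_numvfs` and `sriov_totalvfs` under the device's PCI node. The sketch below, assuming that sysfs layout (function name and error handling are illustrative), requests a number of VFs on a SR-IOV capable NIC; it needs root privileges on a real system.

```python
from pathlib import Path

def enable_vfs(iface, num_vfs, sysfs_root="/sys/class/net"):
    """Request `num_vfs` virtual functions on a SR-IOV capable network device.

    Writes the standard Linux sysfs knob `sriov_numvfs`; `sriov_totalvfs`
    reports the hardware limit advertised by the device.
    """
    dev = Path(sysfs_root) / iface / "device"
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text(str(num_vfs))
```

Each created VF then appears as its own PCI function that can be assigned to a guest, while the physical function stays with the host.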

See also.

