Friday, June 6, 2014

The reality of a 64 bit dom0




One of the key improvements in XenServer Creedence is the introduction of a 64-bit control domain (dom0). This is something I've heard requests for over the past several years, and while there are many reasons to move to a 64-bit dom0, there are some equally good reasons for us to have waited this long to make the change. It's also important to distinguish between a 64-bit hypervisor, which XenServer has always used, and a 64-bit control domain, which is the new bit.

Isn't XenServer 64-bit bare-metal virtualization?

The good news for people who think XenServer is 64-bit bare-metal virtualization is that they're not wrong. All versions of XenServer use the Xen Project hypervisor, and all versions of XenServer have configured that hypervisor to run in 64-bit mode. In a Xen Project-based environment, the first thing the hypervisor does is load its control domain (dom0). For all versions of XenServer prior to Creedence, that control domain was 32-bit.
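One quick way to see the distinction on an existing host is to compare what the hypervisor says about itself with what the dom0 kernel reports. The following is a minimal sketch, assuming the Xen sysfs entry /sys/hypervisor/properties/capabilities is available on the dom0 kernel; the exact strings vary by release, but on a pre-Creedence host you would typically see x86_64 hypervisor capabilities alongside an i686 dom0 kernel.

```python
import platform


def hypervisor_and_dom0_arch():
    """Print the hypervisor's capabilities next to the dom0 kernel architecture."""
    # /sys/hypervisor/properties/capabilities is exposed by the Xen sysfs
    # driver; on a 64-bit hypervisor it contains strings such as
    # "xen-3.0-x86_64". The path being present is an assumption about the kernel.
    try:
        with open("/sys/hypervisor/properties/capabilities") as caps_file:
            caps = caps_file.read().strip()
    except IOError:
        caps = "unavailable (not running under Xen, or sysfs entry missing)"

    # platform.machine() reports the dom0 kernel's own architecture,
    # e.g. "i686" on a 32-bit dom0 or "x86_64" on a 64-bit dom0.
    print("Hypervisor capabilities :", caps)
    print("dom0 kernel architecture:", platform.machine())


if __name__ == "__main__":
    hypervisor_and_dom0_arch()
```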

The goodness of 64-bit

A 64-bit Linux dom0 allows us to remove a number of bottlenecks that were present only because dom0 was 32-bit.

Goodbye low memory

One of the biggest limiting factors with a 32-bit Linux dom0 is the concept of "low memory". On 32-bit computers, the maximum directly addressable memory is 4GB. There are a variety of extensions available to address beyond the 4GB limit, but they all come with a performance and complexity cost. Further, as 32-bit Linux evolved, a split between "low" memory and "high" memory was created, with most everything related to the kernel using low memory and userspace processes running in high memory. Since kernel operations consume low memory, any kernel memory maps and kernel drivers also consume low memory, and you can see how low memory quickly becomes a precious commodity. It is also consumed faster the more memory is made available to a 32-bit Linux dom0. In a typical XenServer installation, low memory is only 752MB, and with the move to a 64-bit dom0 this limit will soon be a thing of the past.
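You can see the split directly on a 32-bit dom0, because the kernel reports it in /proc/meminfo. Below is a minimal sketch using the standard /proc/meminfo field names; on a 64-bit kernel the Low*/High* lines simply don't exist, since all RAM is directly mapped.

```python
def low_memory_report(path="/proc/meminfo"):
    """Show the low/high memory split as reported by the kernel."""
    fields = {}
    with open(path) as meminfo:
        for line in meminfo:
            key, _, value = line.partition(":")
            fields[key] = value.strip()

    # LowTotal/LowFree and HighTotal/HighFree only appear on 32-bit kernels
    # built with HIGHMEM support; a 64-bit dom0 has no such split.
    for key in ("MemTotal", "LowTotal", "LowFree", "HighTotal", "HighFree"):
        print("%-10s %s" % (key, fields.get(key, "not reported (no low/high split)")))


if __name__ == "__main__":
    low_memory_report()
```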

Improved driver stability

With a 32-bit BIOS, MMIO regions are always placed between the 1GB and 3GB physical addresses, and with a 64-bit BIOS they are always placed at the top of physical memory. If an MMIO hole is created, dom0 can choose to re-map that memory so that it is shadowed in virtual address space. The kernel must map an amount of memory roughly equal to the size of the MMIO holes, and in a 32-bit kernel that mapping must come out of low memory. Many drivers map kernel memory to MMIO holes on demand, resulting in success at boot time but potential instability in the driver if there is insufficient low memory to satisfy the dynamic request for an MMIO hole remap. Additionally, while XenServer currently supports PAE and can address more than 4GB, a driver that insists on having its memory allocated in the 32-bit physical address space will fail.
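As a rough way to see where those regions actually land on a given host, the sketch below walks /proc/iomem (the kernel's view of the physical address map) and reports whether each top-level region sits below or above the 4GB boundary. This is an illustrative helper rather than anything XenServer ships, and reading real addresses from /proc/iomem generally requires root.

```python
FOUR_GB = 1 << 32


def mmio_placement(path="/proc/iomem"):
    """List top-level physical address regions and where they sit relative to 4GB."""
    with open(path) as iomem:
        for line in iomem:
            if line.startswith(" "):
                continue  # skip nested child resources
            span, _, name = line.partition(" : ")
            _, _, end_hex = span.strip().partition("-")
            end = int(end_hex, 16)
            placement = "below 4GB" if end < FOUR_GB else "at/above 4GB"
            print("%-40s %s" % (name.strip(), placement))


if __name__ == "__main__":
    mmio_placement()
```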

Support for more modern drivers

Hardware vendors want to ensure the broadest adoption for their devices, and with modern computers shipping with 64-bit processors and memory configurations well in excess of 4GB, the majority of those drivers have been authored for 64-bit operating systems; 64-bit Linux is no different. Moving to a 64-bit dom0 gives us an opportunity to incorporate those newer devices, and potentially to have fewer restrictions on the quantity of devices in a system due to kernel memory usage.

Improved computational performance

During the initial wave of operating systems moving from 32-bit to 64-bit configurations, one of the most often cited benefits was the computational improvement offered by such a migration. We fully intend for dom0 to take advantage of the general-purpose improvements that come from the processor no longer needing to operate in what is today more of a legacy configuration. One of the most immediate benefits is that with a 64-bit dom0 we can use 64-bit compiler settings and take advantage of modern processor extensions. Additionally, there are several performance concessions (such as the quantity of event channels) and code paths which can be optimized when compiled for a 64-bit system versus a 32-bit system.
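The event channel limit is a good example of how word size alone changes a ceiling. Under Xen's traditional 2-level event channel ABI, the pending bitmap in the shared info page is an array of BITS_PER_LONG words of BITS_PER_LONG bits each, so the maximum is the word size squared. A small sketch of that arithmetic:

```python
def two_level_event_channel_limit(bits_per_long):
    """Event channel ceiling under the 2-level ABI: BITS_PER_LONG squared."""
    # One selector word of BITS_PER_LONG bits, each bit covering one pending
    # word of BITS_PER_LONG bits, gives BITS_PER_LONG * BITS_PER_LONG channels.
    return bits_per_long * bits_per_long


if __name__ == "__main__":
    for width in (32, 64):
        print("%d-bit dom0: up to %d event channels"
              % (width, two_level_event_channel_limit(width)))
```

That works out to 1024 event channels for a 32-bit dom0 versus 4096 for a 64-bit dom0 under the same ABI.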

Challenges do exist, though

While there are clear benefits to a 64-bit migration, the move isn't without its set of issues. The most significant issue relates to XenServer pool communications. In a XenServer pool, all hosts have historically been required to be at the same version and patch level, except during the process of upgrading the pool. This restriction serves to ensure that any data being passed between hosts, any configuration information being replicated in xenstore, and all operations are consistent. With the introduction of Storage XenMotion in XenServer 6.1, this requirement was also extended to cross-pool operations when a VDI is being migrated.

 

In essence, you can think of the version requirement as insurance against bad things happening to your precious VM cargo. Unfortunately, that requirement assumes every dom0 has the same architecture, and our migration from 32-bit to 64-bit complicates things. This is because the various data structures used in pool operations were designed around a consistent view of a world based on a 32-bit operating system. That view of the world is directly challenged by a pool consisting of both 32-bit and 64-bit XenServer hosts, as would be present during an upgrade. It's also challenged during cross-pool storage live migration if one pool is 32-bit while the other is 64-bit. We are working to resolve this problem, at least for the 32-bit to 64-bit upgrade, but it is something we will want very active participation from our user community to test once we have completed our implementation efforts. After all, we do subscribe to the notion that your virtual machines are pets and not cattle, regardless of the scale of your virtualized infrastructure.

