XenServer Creedence Alpha 2 Released
We're pleased to announce that XenServer Creedence Alpha 2 has been released. Alpha 2 builds on the capabilities seen in Alpha 1, and we're interested in your feedback on this release. With Alpha 1, we were primarily interested in basic feedback on the stability of the code; with Alpha 2 we're interested in feedback not only on basic operations, but also on storage performance.
The following functional enhancements are contained in Alpha 2:
- Storage read caching. Boot storm conditions in environments using common templates can create unnecessary IO on shared storage systems. Storage read caching uses free dom0 memory to cache common read IO and reduce the impact of boot storms on storage networks and NAS devices.
- DM Multipath storage support. For users of legacy MPP-RDAC, that functionality has been deprecated in XenServer Creedence, following storage industry practice. If you are still using MPP-RDAC with XenServer 6.2 or earlier, please file an incident at https://bugs.xenserver.org to record your usage so that we can develop appropriate guidance.
- Support for Ubuntu 14.04 and CentOS 5.10 as guest operating systems
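To illustrate why read caching helps with boot storms, here is a minimal sketch (not XenServer code; the function and block counts are hypothetical). Many VMs cloned from one template read the same blocks at boot; a read cache in free dom0 memory means only the first read of each shared block reaches the shared storage backend:

```python
# Illustrative sketch only: counts reads that reach shared storage
# during a boot storm, with and without a dom0-style read cache.

def boot_storm_reads(num_vms, template_blocks, cache_enabled):
    """Return the number of reads that hit the storage backend."""
    cache = set()          # stands in for free dom0 memory used as a cache
    backend_reads = 0
    for _ in range(num_vms):
        for block in template_blocks:
            if cache_enabled and block in cache:
                continue   # served from the read cache, no backend IO
            backend_reads += 1
            if cache_enabled:
                cache.add(block)
    return backend_reads

blocks = range(1000)       # blocks shared by VMs from a common template
print(boot_storm_reads(50, blocks, cache_enabled=False))  # 50000
print(boot_storm_reads(50, blocks, cache_enabled=True))   # 1000
```

With 50 VMs sharing 1000 template blocks, the cache cuts backend reads from 50,000 to 1,000 in this toy model; real workloads also issue VM-specific writes and uncached reads, so the benefit applies to the common read IO described above.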
The following performance improvements were observed with Alpha 2 compared to Alpha 1, but we'd like to hear your experiences:
- GRO enabled physical network to guest network performance improved by 65%
- Aggregate network throughput improved by 50%
- Disk IO throughput improved by 100%
While these improvements are impressive, we do need to remember that this is alpha code. In practice, that means the final numbers could drop somewhat once we start looking at overall scalability and ensuring stable operation. That said, if you have performance issues with this alpha, we want to hear about them. Please also watch this blog for updates from our performance engineering team detailing how some of these improvements were measured.
Please do download XenServer Creedence Alpha 2, and provide your feedback in our incident database.