Tuesday, November 24, 2015

How Elastic Is Your Cloud? Part 2 [feedly]



----
How Elastic Is Your Cloud? Part 2
// Citrix Blogs

In Part 1 of my blog, we looked at how Citrix CloudPlatform can perform well in high-scale environments and how certain configurations can be tuned to achieve the desired response time. In this installment, we will look at a few more advanced scenarios. Batch VM Deployment Batch VM deployment is quite a common scenario in a cloud […]

----

Shared via my feedly reader


Sent from my iPhone

Adding Linux VDA to a XenDesktop 7.6 Environment [feedly]



----
Adding Linux VDA to a XenDesktop 7.6 Environment
// Citrix Blogs

Introduction As new and existing Citrix XenDesktop customers continue to move and grow their virtual desktop environments to reap the benefits of centralization, mobility, security, and operational efficiency, Citrix has added the Linux Virtual Desktop Agent which integrates with XenApp and XenDesktop and extends the FlexCast Management Architecture (FMA) to enable additional use cases in […]

----

Shared via my feedly reader


Sent from my iPhone

What’s Microsoft’s Cloud Strategy? Find Out @Summit! [feedly]



----
What's Microsoft's Cloud Strategy? Find Out @Summit!
// Citrix Blogs

Have you done an image search for the word "partnership"? The image results turn up pictures of people shaking hands, teams putting together puzzle pieces or building blocks – it gives you the sense that "working in partnership" means working together to build something stronger, to build something great. This is what the Citrix and […]

----

Shared via my feedly reader


Sent from my iPhone

Automate App Approval for Storefront with Octoblu [feedly]



----
Automate App Approval for Storefront with Octoblu
// Citrix Blogs

Have you ever had to fill out an "application request form" on your company's Intranet site or open a "request for application access" ticket in your company's tracking system? Have you ever received an email asking you to log into your company's network to approve an application request from one of your employees only to find that […]

----

Shared via my feedly reader


Sent from my iPhone

Adding Windows 10 to a XenDesktop 7.6 Environment [feedly]



----
Adding Windows 10 to a XenDesktop 7.6 Environment
// Citrix Blogs

Introduction As new and existing Citrix XenDesktop customers consider migrating to Microsoft Windows 10, mobilizing the desktop experience through HDX is key to both virtual and physical deployments.  The Windows 10 excitement has been steadily increasing, and along with it our XenDesktop use cases continue to grow, as well. With the release of XenDesktop 7.6 […]

----

Shared via my feedly reader


Sent from my iPhone

Keys to the Cloud: New Resources for Citrix CloudPlatform [feedly]



----
Keys to the Cloud: New Resources for Citrix CloudPlatform
// Citrix Blogs

In my last article–Keys to the Cloud–I talked about the Blog-a-thon we just finished and how you can best take advantage of it. For us, the Blog-a-thon was a great way to help educate our customers and partners about what's new with our Software Infrastructure products and to spotlight the great work the people here are doing. My last […]

----

Shared via my feedly reader


Sent from my iPhone

Keys to the Cloud: New Citrix Lifecycle Management Resources [feedly]



----
Keys to the Cloud: New Citrix Lifecycle Management Resources
// Citrix Blogs

Check out the latest information resources for Citrix Lifecycle Management and find out who won our Blog-a-thon contest for the best blogs about Citrix data center infrastructure products.

----

Shared via my feedly reader


Sent from my iPhone

Sealing Steps After Updating a vDisk [feedly]



----
Sealing Steps After Updating a vDisk
// Citrix Blogs

At Citrix Consulting, we are often asked for a step-by-step process to seal a vDisk while performing a minor update when the customer uses Citrix PVS (or MCS). A major vDisk update or vDisk creation requires additional steps. There are already many sites and blogs providing such information, and a few Citrix KB articles cover parts of it (when […]

----

Shared via my feedly reader


Sent from my iPhone

Chimera: The Many Heads of Crypto-ransomware [feedly]



----
Chimera: The Many Heads of Crypto-ransomware
// A Collection of Bromides on Infrastructure

November has quickly become one of the biggest months for crypto-ransomware all year. Multiple new crypto-ransomware variants have been introduced, as cyber criminals prepare to prey on vulnerable users heading online for their holiday shopping.

The first variant, Chimera, has been encrypting both files and network drives, as well as threatening to publish personal data and pictures online if the ransom is not paid. Chimera has been in circulation since September, using business-focused emails as its primary avenue of compromise.

According to the Anti-Botnet Advisory Centre:

"Several variants…try to target specific employees within a company and they have one thing in common: within the email, a link points to a source at Dropbox, claiming that additional information has been stored there."

Users naïve enough to click on the link are infected with Chimera, which encrypts all locally stored data and demands a nearly $700 ransom.

Chimera

Currently, there is no evidence that Chimera is following through on its threat to publish the compromised data, but the threat alone is a new modus operandi for crypto-ransomware.

Next up, Cryptowall has been updated to Cryptowall 4.0. Bromium has previously chronicled the history of Cryptowall and crypto-ransomware in its report, "Understanding Crypto-Ransomware." Cryptowall is one of the original crypto-ransomware variants, first appearing around November 2013. In addition to encrypting user files, Cryptowall 4.0 also encrypts file names, making file recovery even less likely.

Cryptowall

Third, CryptoLocker Service is also an update to one of the original crypto-ransomware variants, CryptoLocker. CryptoLocker Service emerged from the Darknet this week, run by an individual who goes by Fakben (known for his participation in stolen credit card forums). Fakben is making CryptoLocker available as a service for $50, plus ten percent.

Fakben notes that this ransomware shares only a name with CryptoLocker, making it clear the new code is different from the original.

Regardless of the variant, crypto-ransomware relies on exploiting vulnerabilities in products such as Flash and Java. A recent Bromium survey determined that 90 percent of security professionals believe their organization would be more secure if it disabled Flash.

Finally, Linux servers have been hit by a ransomware attack that gains administrative access and encrypts key files. These attacks should be of little concern to end users since the attacks were against admin servers.

Organizations should be concerned with crypto-ransomware because once an attack succeeds, recovery options are limited to installing from back-ups. Detection and reaction are destined to fail against crypto-ransomware. The only hope for preventing crypto-ransomware attacks is proactive protection, such as the threat isolation provided by Bromium vSentry.



----

Shared via my feedly reader


Sent from my iPhone

Getting Started with Docker Toolbox and Compose [feedly]



----
Getting Started with Docker Toolbox and Compose
// Docker Blog

Today at DockerCon EU 2015, I ran through a demo of running and developing an app from a fresh computer using Docker Toolbox and Compose. This was to show how easy it is for new developers to get started when … Continued
----

Shared via my feedly reader


Sent from my iPhone

Scale Testing Docker Swarm to 30,000 Containers [feedly]



----
Scale Testing Docker Swarm to 30,000 Containers
// Docker Blog

1,000 nodes, 30,000 containers, 1 Swarm manager Swarm is the easiest way to run a Docker app in production. It lets you take an app that you've built in development and deploy it across a cluster of servers. Recently we … Continued
----

Shared via my feedly reader


Sent from my iPhone

A New UI for Docker? Visualizing Container Management with Minecraft [feedly]



----
A New UI for Docker? Visualizing Container Management with Minecraft
// Docker Blog

written by Adrien Duermael, Software Engineer at Docker, Inc. and Gaëtan de Villèle, Software Engineer at Docker, Inc. Since at least 1999, system administrators have been looking for ways to make Ops a more visual, exciting environment. As recently as … Continued
----

Shared via my feedly reader


Sent from my iPhone

Moby’s Cool Hacks from DockerCon EU 2015: Container Migration Tool [feedly]



----
Moby's Cool Hacks from DockerCon EU 2015: Container Migration Tool
// Docker Blog

Back in September, one of the winning teams from Docker Global Hack Day stood out for us. The Container Migration team, drawing inspiration from a DockerCon team that migrated a Quake 3 container around the world, showed migrating a container … Continued
----

Shared via my feedly reader


Sent from my iPhone

DockerCon EU 2015: Catch Up on All of the Day 2 News! [feedly]



----
DockerCon EU 2015: Catch Up on All of the Day 2 News!
// Docker Blog

We cannot contain our excitement about DockerCon EU 2015! With over 1500 attendees, 80 speakers and 60 sponsors, these past few days were packed with great Docker content from the global Docker community – don't worry, we'll post all of the slides and videos … Continued
----

Shared via my feedly reader


Sent from my iPhone

DockerCon EU 2015: Watch the Day 1 General Session [feedly]



----
DockerCon EU 2015: Watch the Day 1 General Session
// Docker Blog

Day 1 of DockerCon EU 2015 was awesome! The day started with an action-packed general session including exciting Docker announcements with live demos (the demo gods were pleased!) and attendees hacking hardware using Docker and littleBits. Watch the video of the … Continued
----

Shared via my feedly reader


Sent from my iPhone

CoreOS Introduces Clair: Open Source Vulnerability Analysis for your Containers [feedly]



----
CoreOS Introduces Clair: Open Source Vulnerability Analysis for your Containers
// CoreOS Blog

Today we are open sourcing a new project called Clair, a tool to monitor the security of your containers. Clair is an API-driven analysis engine that inspects containers layer-by-layer for known security flaws. Using Clair, you can easily build services that provide continuous monitoring for container vulnerabilities. CoreOS believes tools that improve the security of the world's infrastructure should be available for all users and vendors, so we made the project open source. With that same purpose, we welcome your feedback and contributions to the Clair project.

Clair is the foundation of the beta version of Quay Security Scanning, a new feature running now on Quay to examine the millions of containers stored there for security vulnerabilities. Quay users can log in today to see Security Scanning information in their dashboard, including a list of potentially vulnerable containers in their repositories. The Quay Security Scanning beta announcement has more details for Quay users.

Why Create Clair: For Improved Security

Vulnerabilities will always exist in the world of software. Good security practice means being prepared for mishaps: identifying insecure packages and being ready to update them quickly. Clair is designed to help you identify insecure packages that may exist in your containers.

Understanding how systems are vulnerable is a laborious task, especially when dealing with heterogeneous and dynamic setups. The goal is to empower any developer to gain intelligence about their container infrastructure. Beyond that, teams are empowered to take action and apply fixes to vulnerabilities as they arise.

How Clair Works

Clair scans each container layer and provides a notification of vulnerabilities that may be a threat, based on the Common Vulnerabilities and Exposures database (CVE) and similar databases from Red Hat, Ubuntu, and Debian. Since layers can be shared between many containers, introspection is vital to build an inventory of packages and match that against known CVEs.

Automatic detection of vulnerabilities will help increase awareness of best security practices across developer and operations teams, and encourage action to patch and address those vulnerabilities. When new vulnerabilities are announced, all existing layers are rescanned and notifications are sent.
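To make the "API-driven" part concrete, here is a minimal sketch of what a client of such a service could look like. It assumes a Clair instance listening on localhost and uses Python's requests library; the endpoint paths and JSON field names shown are illustrative assumptions and should be checked against the Clair API documentation for the version you run.

import requests

CLAIR = "http://localhost:6060"  # assumed address of the Clair API service

def submit_layer(layer_id, layer_url, parent_id=None):
    # Ask Clair to fetch and index one image layer; field names are assumptions.
    payload = {"ID": layer_id, "Path": layer_url}
    if parent_id:
        payload["ParentID"] = parent_id  # link to the layer beneath it in the image
    requests.post(CLAIR + "/v1/layers", json=payload).raise_for_status()

def layer_vulnerabilities(layer_id, minimum_priority="Low"):
    # Return the known vulnerabilities Clair has matched against the layer's packages.
    r = requests.get(CLAIR + "/v1/layers/" + layer_id + "/vulnerabilities",
                     params={"minimumPriority": minimum_priority})
    r.raise_for_status()
    return r.json()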

For example, CVE-2014-0160, aka "Heartbleed" has been known for over 18 months, yet Quay Scanning found it is still a potential threat to 80 percent of the Docker images users have stored on Quay. Just like CoreOS Linux contains an auto-update tool which patched Heartbleed at the OS layer, we hope this tool will improve the security of the container layer, and help make CoreOS the most secure place to run containers.

Take note that vulnerabilities often rely on particular conditions in order to be exploited. For example, Heartbleed only matters as a threat if the vulnerable OpenSSL package is installed and being used. Clair isn't suited for that level of analysis and teams should still undertake deeper analysis as required.

Get Started

To learn more, watch this talk presented by Joey Schorr and Quentin Machu about Clair. And, here are the slides from the talk.

This is only the beginning and we expect more and more development. Contributions and support from the community are welcome – try it out in Quay or enable it in your container environment and let us know what you think.

The team behind Clair will be at DockerCon EU in Barcelona, November 16-17. Please stop by the Quay booth to learn more or see a demo of Clair or Quay Security Scanning.


----

Shared via my feedly reader


Sent from my iPhone

How to Deliver Enterprise DevOps with Intelligent Cloud Foundry Management [feedly]



----
How to Deliver Enterprise DevOps with Intelligent Cloud Foundry Management
// Virtualization Management Software & Data Center Control | VMTurbo » VMTurbo Blog

More and more organizations are moving towards a DevOps focused workflow for the deployment and management of their applications. The reason behind this trend is a desire to become increasingly agile and efficient while enabling the business to keep up … Continue Reading »

The post How to Deliver Enterprise DevOps with Intelligent Cloud Foundry Management appeared first on Virtualization Management Software & Data Center Control | VMTurbo.


----

Shared via my feedly reader


Sent from my iPhone

Review: XenServer 6.5 SP1 Training CXS-300 [feedly]



----
Review: XenServer 6.5 SP1 Training CXS-300
// Latest blog entries

A few weeks ago, I received an invitation to participate in the first new XenServer class to be rolled out in over three years, namely CXS-300: Citrix XenServer 6.5 SP1 Administration. Those of you with good memories may recall that XenServer 6.0, on which the previous course was based, was officially released on September 30, 2011. Being an invited guest in what was to be only the third time the class had ever been held was something that just couldn't be passed up, so I hastily agreed. After all, the evolution of the product since 6.0 has been enormous. Plus, I have been a huge fan of XenServer since first working with version 5.0 back in 2008. From shortly before the open-sourcing of XenServer in 2013, I still recall the warnings of brash naysayers that XenServer was all but dead. However, things took a very different turn in the summer of 2013 with the open-source release and subsequent major efforts to improve and augment product features. While certain elements were pulled and restored and there was a bit of confusion about changes in the licensing models, things have stabilized and, all told, the power and versatility of XenServer with the 6.5 SP1 release is now at a level some thought it would never reach.

FROM 6.0 TO 6.5 – AND BEYOND

XenServer (XS for short) 6.5 SP1 made its debut on May 12, 2015. The feature set and changes are – as always – incorporated within the release notes. There are a number of changes of note that include an improved hotfix application mechanism, a whole new XenCenter layout (since 6.5), increased VM density, more guest OS support, a 64-bit kernel, the return of workload balancing (WLB) and the distributed virtual switch controller (DVSC) appliance, in-memory read caching, and many others. Significant improvements have been made to storage and network I/O performance and overall efficiency. XS 6.5 was also a release that benefited significantly from community participation in the Creedence project and the SP1 update builds upon this.

One notable point is that XenServer has been found to now host more XenDesktop/XenApp (XD/XA) instances than any other hypervisor (see this reference). And, indeed, when XenServer 6.0 was released, a lot of the associated training and testing on it was in conjunction with Provisioning Services (PVS). Some users, however, discovered XenServer long before this as a perfectly viable hypervisor capable of hosting a variety of Linux and Windows virtual machines, without having even given thought to XenDesktop or XenApp hosting. For those who first became familiar with XS in that context, the added course material covering provisioning services had in reality relatively little to do with XenServer functionality as an entity. Some viewed PVS as an overly emphasized component of the course and exam. In this new course, I am pleased to say that XS's original roots as a versatile hypervisor are where the emphasis now lies. XD/XA is of course discussed, but the many features that are fundamental to XS itself are what the course focuses on, and it does that well.

COURSE MATERIALS: WHAT'S INCLUDED

The new "mission" of the course from my perspective is to focus on the core product itself and not only understand its concepts, but to be able to walk away with practical working knowledge. Citrix puts it that the course should be "engaging and immersive". To that effect, the instructor-led course CXS-300 can be taken in a physical classroom or via remote GoToMeeting (I did the latter) and incorporates a lecture presentation, a parallel eCourseware manual plus a student exercise workbook (lab guide) and access to a personal live lab during the entire course. The eCourseware manual serves multiple purposes, providing the means to follow along with the instructor and later enabling an independent review of the presented material. It adds a very nice feature of providing an in-line notepad for each individual topic (hence, there are often many of these on a page) and these can be used for note taking and can be saved and later edited. In fact, a great takeaway of this training is that you are given permanent access to your personalized eCourseware manual, including all your notes.

The course itself is well organized; there are so many components to XenServer that five days works out, in my opinion, to be about right – partly because question and answer sessions with the instructor often take up more time than one might guess, and partly because, in some cases, all participants may already have some familiarity with XS or another hypervisor, which makes it possible to go into added depth in some areas. There will always need to be some flexibility depending on the level of students in any particular class.

A very strong point of the course is the set of diagrams and illustrations that are incorporated, some of which are animated. These complement the written material very well and the visual reinforcement of the subject matter is very beneficial. Below is an example, illustrating a high availability (HA) scenario:

[Image: XS6.5SP1_course_image.jpg – high availability (HA) scenario]

 

The course itself is divided into a number of chapters that cover the whole range of features of XS, reinforced by some in-line Q&A examples in the eCourseware manual and with related lab exercises. Included as part of the course are not only important standard components, such as HA and Xenmotion, but some that require plugins or advanced licenses, such as workload balancing (WLB), the distributed virtual switch controller (DVSC) appliance and in-memory read caching. The immediate hands-on lab exercises in each chapter, covering the just-discussed topics, are a very strong point of the course, and the majority of exercises are really well designed to allow putting the material directly to practical use. For those who already have some familiarity with XS and are able to complete the assignments quickly, the lab environment itself offers a great sandbox in which to experiment. Most components can readily be re-created if need be, so one can afford to be somewhat adventurous.

The lab, while relying heavily on the XenCenter GUI for most of the operations, does make a fair amount of use of the command line interface (CLI) for some operations. This is a very good thing for several reasons. First off, one may not always have access to XenCenter and knowing some essential commands is definitely a good thing in such an event. The CLI is also necessary in a few cases where there is no equivalent available in XenCenter. Some CLI commands offer some added parameters or advanced functionality that may again not be available in the management GUI. Furthermore, many operations can benefit from being scripted and this introduction to the CLI is a good starting point. For Windows aficionados, there are even some PowerShell exercises to whet their appetites, plus connecting to an Active Directory server to provide role-based access control (RBAC) is covered.

THE INSTRUCTOR

So far, the materials and content have been the primary points of discussion. However, what truly can make or break a class is the instructor. The class happened to be quite small, with most individuals attending remotely. Attendees were in fact from four different countries in different time zones, making it a very early start for some and very late in the day for others. Roughly half of those participating in the class were not native English speakers, though all had admirable skills in both English and some form of hypervisor administration. Because everyone was able to keep up a common pace, the class flowed exceptionally well. I was impressed with the overall abilities and astuteness of each and every participant.

The instructor, Jesse Wilson, was first class in many ways. First off, knowing the material and being able to present it well are primary prerequisites. But above and beyond that was his ability to field questions related to the topic at hand, to go off onto relevant tangential material, and to juggle all of that while still making sure the class stayed on schedule. Keeping the flow going while being entertaining enough to hold students' attention is key to a successful class. When elements of a topic became more of a debatable issue, he was quick not only to tackle the material in discussion, but to try it out right away in the lab environment to resolve it. The same pertained to demonstrating some themes that could benefit from a live demo as opposed to explaining them just verbally. Another strong point was his adding his own drawings to the material to further clarify certain illustrations where additional examples and explanations were helpful.

SUMMARY

All told, I found the course well structured, very relevant to the product and the working materials to be top notch. The course is attuned to the core product itself and all of its features, so all variations of the product editions are covered.

Positive points:

  • Good breadth of material
  • High-quality eCourseware materials
  • Well-presented illustrations and examples in the class material
  • Q&A incorporated into the eCourseware book
  • Ability to save course notes and permanent access to them
  • Relevant lab exercises matching the presented material
  • Real-life troubleshooting (nothing ever runs perfectly!)
  • Excellent instructor

Desiderata:

  • More "bonus" lab materials for those who want to dive deeper into topics
  • More time spent on networking and storage
  • A more responsive lab environment (which was slow at times)
  • More coverage of more complex storage Xenmotion cases in the lecture and lab

In short, this is a class that fulfills the needs of anyone from just learning about XenServer to even experienced administrators who want to dive more deeply into some of the additional features and differences that have been introduced in this latest XS 6.5 SP1 release. CXS-300: Citrix XenServer 6.5 SP1 Administration represents a makeover in every sense of the word, and I would say the end result is truly admirable.


Read More
----

Shared via my feedly reader


Sent from my iPhone

CloudStack European User Group roundup – November 2015 [feedly]



----
CloudStack European User Group roundup – November 2015
// CloudStack Consultancy & CloudStack...

An intimate feel at yesterday's European User Group only seemed to encourage discussion, and despite slightly lower numbers than usual we managed to run a little late! Hosted by our friends at Trend Micro here in London, we started with lunch, and once everyone had eaten it was down to business.

Giles Sirett (chairman of the user group) sent his apologies, as he was unable to assume his usual meetup duties. Hosting was therefore down to Paul Angus and me (Steve Roles). I started by welcoming everyone to the group, briefly running through the agenda and talking about the recent CloudStack Collaboration Conference in Dublin. All the videos of all the fantastic talks from that event can be found here: https://www.youtube.com/playlist?list=PLGeM09tlguZSeNyOyQKJHNX4pxgK-yoTA. It's looking like next year's conference will be held in Sao Paulo, Brazil (dates TBC) but we're already looking forward to another great event. Paul then took us through the CloudStack news, including new features in 4.6 (currently in the voting stages). For more details take a look through our slides:

Time for our first speaker of the day – René Moser of Swiss TXT joined us from Switzerland, to talk about Ansible and CloudStack. René started with some general use cases for Ansible before going into some detail on how Swiss TXT use it with CloudStack. He then gave us an overview and brief history of Ansible. Ansible has 21 CloudStack modules, all integration tested and included in v2.0, and to show this integration René was brave enough to give us live demos. René's slides are here:

Next up was Daan Hoogland of Leaseweb, who made the trip over from Amsterdam. Daan talked about testing, and the need for more of it! Specifically, he argued that the community needs to run almost continuous functional testing against CloudStack, using 'mini clouds' or nested environments. The importance of integration into customer environments and the benefits of using Jenkins were also discussed and provoked lively debate in the room. For more information Daan's slides are here:

After a coffee break, it was the turn of our host, Jon Noble of Trend Micro, to give us his talk on securing a cloud environment, which probably provoked the most discussion and questions, continuing in the pub well after the meeting had finished! Jon spoke about the changing landscape of security, and how more traditional security measures were increasingly incapable of protecting cloudy workloads and VMs. He then referenced the recent, high-profile attacks (TalkTalk and Sony), and talked about the various tools that may have been used, and how they could have been purchased on the dark web. Jon's slides are here and they're well worth a look:

Last up was our very own Paul Angus, with a talk on CloudStack Networking. Starting with physical networking, Paul talked about why you would separate networks, converged networking, labelling, mappings and some of the mistakes that are easy to make. Touching on storage and guest networks, Paul moved on to the pros and cons of isolated and shared networks. Paul finished by talking about an exciting new feature that ShapeBlue are currently working on – OSPF and routed VPC. There is much, MUCH more detail in Paul's slides:

Following Paul's talk and questions, we enjoyed a few beers in a Paddington bar where the discussions continued. Our next European CloudStack User Group will be in London in March 2016 – if you're interested in CloudStack and want to get involved – come to the meetups, join the CloudStack European User Group on LinkedIn, and join the conversation on the mailing lists https://cloudstack.apache.org/mailing-lists.html.

Thanks to Trend Micro for hosting our meetup, and thanks to René, Daan, Jon and Paul for giving us their time and expertise to prepare talks.


----

Shared via my feedly reader


Sent from my iPhone

ShapeBlue enables the University of Sao Paulo to deliver federated cloud services [feedly]



----
ShapeBlue enables the University of Sao Paulo to deliver federated cloud services
// CloudStack Consultancy & CloudStack...

The University of São Paulo (USP) is Latin America's largest university and Brazil's most prestigious academic institution. It produces more doctorate degrees annually than any other university in the world and ranks fifth in the number of scientific articles published. It is one of the world's leading research institutions, with 100,000 students, 6,000 professors and 17,000 employees spread over 11 campuses.

In 2012, USP successfully built a private cloud environment (Cloud USP) that would help them overcome the challenges of having 150 disparate IT environments and the ever increasing demands for compute and storage in their dynamic research environment.

In 2014, USP saw the opportunity to create a federated "cloud of clouds" across all of Brazil's leading academic institutions. The ability of these institutions to share computing resources would drive collaboration and shared research across those organisations.

Needed a platform that could rapidly scale

The first iteration of Cloud USP consolidated the University's 150 datacentres down to 6 and brought all of its corporate, educational and research environments together into a single private cloud environment with user self-service and pooling of resources. This initial project was hugely successful and allowed the university to cut physical storage footprint by 90% (despite data growth in excess of 300%) and greatly increase their IT operational efficiency.

Cloud USP has been so successful that it has resulted in a need for the platform to rapidly scale over the next few years, as Cyrano Rizzo, Office of IT at USP, explained: "Cloud USP is already a massive environment, but demand from our departments means that we need to scale the compute infrastructure by 300% over the coming year. We therefore had to plan the future technology for the platform carefully and make sure that we chose something that was proven and scalable and gave us the agility we needed to cope with future requirements."

Moving to an open source platform

USP had used a vendor distribution of Apache CloudStack, but that presented challenges both in terms of the required features and cost-at-scale for the upcoming project. Moving to an open source platform, by contrast, would enable USP to quickly develop any new functionality they require. It would also provide the added advantage of allowing the University to be directly involved with the core development and maintenance of the technology. This would ultimately free them from commercially driven roadmaps and allow them to focus on what they really need for their rapidly evolving environment.

Apache CloudStack was the natural choice to meet these needs, as it was the basis for their existing vendor technology. This would mean that USP would not have to undergo a steep learning curve in order to migrate, as they were already actively participating in the Apache CloudStack community.

Cyrano explained, "Moving to open source is not primarily about cost for us. It is about our ability to directly contribute the features we need to the platform. CloudStack, as an Apache project, is a very mature and well governed open source community. We like very much that it is a project driven by its users and not by software vendors. A vendor driven development approach simply does not give us the agility we need."

With the new open source infrastructure now in place, USP have been able to identify the benefits of the change, as Cyrano explained. "We are now manufacturer independent which is a major plus as we have the autonomy we want to be able to freely customise and improve the software."

New features to allow cloud federation

The most important new features developed by ShapeBlue enhanced the security model of CloudStack to allow USP to provide a seamless experience for users across different institutions. This means that users can provision infrastructure resources in the cloud environments of any participating organisation. As these features have been contributed into the open source Apache CloudStack project, it also allows new institutions to easily join this federation.

"I believe that the federation will be a great thing," said Cyrano. "Now it's possible to interact between autonomous private cloud computing environments without the need to authenticate again. It also enables collaborative research between different academic institutes while maintaining the researcher identity in their own university. This means the researcher needs just one identity to authenticate in any cloud environment of the universities that participate in the RNP CAFe. In the future, we will be able to transfer virtual resources such as instances, templates, disks etc from one cloud to another and integrate networks."

Other supporting features were also developed: a PaaS offering (based on the open source Tsuru) was integrated, the networking functionality of CloudStack was extended for the University, and a mechanism to track the financial usage of their cloud infrastructure was built.

Working with ShapeBlue

In order to realise their vision, USP decided that they needed a partner who could provide professional expertise and experience. After a public tender process, ShapeBlue were chosen.
Cyrano said, "We selected ShapeBlue to work with us on this project because we were extremely impressed with their knowledge, professionalism and their widespread experience of implementing environments like ours. There is no other company who has."

ShapeBlue consultants were able to carry out the migration to Apache CloudStack in 3 months, but were also heavily involved in a number of other important tasks, as Marco Sinhoreli, Managing Consultant of ShapeBlue Brazil, explained: "We helped develop an additional feature set for the USP cloud environment, as well as greatly increasing the scale of the environment.

USP and ShapeBlue have been working on many customisations and improvements: for example, routed VPC, which enables the capability of routing the tiers end-to-end, and a quota service that permits limiting tenants by currency, all of them open source and contributed back to Apache CloudStack.

Commenting on working with ShapeBlue, Cyrano concluded: "They are a great partner and, like me, are passionate about Apache CloudStack. I cannot stress enough the importance of the open source community and the key role that ShapeBlue play in that. They have such in-depth knowledge of the product coupled with the experience of working with it in the real world. Without that we would not have been able to upgrade what is one of the largest private cloud environments in the world so quickly.

"I would highly recommend ShapeBlue to anyone working on Apache CloudStack. Their reputation within the community is second to none and they are doing some amazing things in the cloud computing sector."


----

Shared via my feedly reader


Sent from my iPhone

Recovery of VMs to new CloudStack instance [feedly]



----
Recovery of VMs to new CloudStack instance
// CloudStack Consultancy & CloudStack...

We recently came across an issue where a CloudStack instance was beyond recovery, but the backend XenServer hypervisors were quite happily running user VMs and Virtual Routers. As building a new CloudStack instance was the only option, the problem was now how to recover all the user VMs to a state in which they could be imported into the new CloudStack instance.

Mapping out user VMs and VHD disks

The first challenge is to map out all the VMs and work out which VMs belong to which CloudStack account, and which VHD disks belong to which VMs. To do this, first recover the original CloudStack database and then query the vm_instance, service_offering, account and domain tables.

In short we are interested in:

  • VM instance ID and names
  • VM instance owner account ID and account name
  • VM instance owner domain ID and domain name
  • VM service offering, which determines the VM CPU / memory spec
  • VM volume ID, name, size, path, type (root or data) and state for all VM disks.

At the same time we are not interested in:

  • System VMs
  • User VMs in state "Expunging", "Expunged", "Destroyed" or "Error". The "Error" state would indicate the VM was not healthy on the original infrastructure.
  • VM disk volumes which are in state "Expunged" or "Expunging". Both of these would indicate the VM was in the process of being deleted on the original CloudStack instance.

From a SQL point of view we do this as follows:

SELECT
  cloud.vm_instance.id as vmid,
  cloud.vm_instance.name as vmname,
  cloud.vm_instance.instance_name as vminstname,
  cloud.vm_instance.display_name as vmdispname,
  cloud.vm_instance.account_id as vmacctid,
  cloud.account.account_name as vmacctname,
  cloud.vm_instance.domain_id as vmdomainid,
  cloud.domain.name as vmdomname,
  cloud.vm_instance.service_offering_id as vmofferingid,
  cloud.service_offering.speed as vmspeed,
  cloud.service_offering.ram_size as vmmem,
  cloud.volumes.id as volid,
  cloud.volumes.name as volname,
  cloud.volumes.size as volsize,
  cloud.volumes.path as volpath,
  cloud.volumes.volume_type as voltype,
  cloud.volumes.state as volstate
FROM cloud.vm_instance
  right join cloud.service_offering on (cloud.vm_instance.service_offering_id=cloud.service_offering.id)
  right join cloud.volumes on (cloud.vm_instance.id=cloud.volumes.instance_id)
  right join cloud.account on (cloud.vm_instance.account_id=cloud.account.id)
  right join cloud.domain on (cloud.vm_instance.domain_id=cloud.domain.id)
where
  cloud.vm_instance.type='User'
  and not (cloud.vm_instance.state='Expunging' or cloud.vm_instance.state='Destroyed' or cloud.vm_instance.state='Error')
  and not (cloud.volumes.state='Expunged' or cloud.volumes.state='Expunging')
order by cloud.vm_instance.id;

This will return a list of VMs and disks like the following:

vmid | vmname | vminstname | vmdispname | vmacctid | vmacctname | vmdomainid | vmdomname | vmofferingid | vmspeed | vmmem | volid | volname | volsize | volpath | voltype | volstate
24 | rootvm1 | i-2-24-VM | rootvm1 | 2 | admin | 1 | ROOT | 1 | 500 | 512 | 30 | ROOT-24 | 21474836480 | 34c8b964-4ecb-4463-9535-40afc0bd2117 | ROOT | Ready
25 | ppvm2 | i-5-25-VM | ppvm2 | 5 | peterparker | 2 | SPDM Inc | 1 | 500 | 512 | 31 | ROOT-25 | 21474836480 | 10c12a4f-7bf6-45c4-9a4e-c1806c5dd54a | ROOT | Ready
26 | ppvm3 | i-5-26-VM | ppvm3 | 5 | peterparker | 2 | SPDM Inc | 1 | 500 | 512 | 32 | ROOT-26 | 21474836480 | 7046409c-f1c2-49db-ad33-b2ba03a1c257 | ROOT | Ready
26 | ppvm3 | i-5-26-VM | ppvm3 | 5 | peterparker | 2 | SPDM Inc | 1 | 500 | 512 | 36 | ppdatavol | 5368709120 | b9a51f4a-3eb4-4d17-a36d-5359333a5d71 | DATADISK | Ready

This now gives us all the information required to import the VM into the right account once the VHD disk file has been recovered from the original primary storage pool.
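If the estate contains many VMs it can be worth dumping this mapping to a file for use during the recovery work. Below is a minimal sketch, assuming the recovered database is reachable over MySQL, that the query above has been saved to recovery_query.sql, and using the PyMySQL driver; the host and credentials are placeholders.

import csv
import pymysql

# Placeholders: point these at the restored copy of the "cloud" database.
conn = pymysql.connect(host="db-host", user="cloud", password="secret", database="cloud")
with conn.cursor() as cur, open("vm_mapping.csv", "w", newline="") as out:
    cur.execute(open("recovery_query.sql").read())   # the SELECT shown above
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])  # column headers
    writer.writerows(cur.fetchall())
conn.close()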

Recovering VHD files

Using the information from the database query above we now know that e.g. the VM "ppvm3":

  • Is owned by the account "peterparker" in domain "SPDM Inc".
  • Used to have 1 vCPU @ 500MHz and 512MB vRAM.
  • Had two disks:
    • A root disk with ID 7046409c-f1c2-49db-ad33-b2ba03a1c257.
    • A data disk with ID b9a51f4a-3eb4-4d17-a36d-5359333a5d71.

If we now check the original primary storage repository we can see these disks:

-rw-r--r-- 1 root root  502782464 Nov 17 12:08 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root      13312 Nov 17 11:49 b9a51f4a-3eb4-4d17-a36d-5359333a5d71.vhd

This should in theory make recovery easy. Unfortunately due to the nature of XenServer VHD disk chains it's not that straight forward. If we tried to import the root VHD disks as a template this would succeed, but as soon as we try to spin up a new VM from this template we get the "insufficient resources" error from CloudStack. If we trace this back in the CloudStack management log or XenServer SMlog we will most likely find an error along the lines of "Got exception SR_BACKEND_FAILURE_65 ; Failed to load VDI". The root cause of this is we have imported a VHD differencing disk, or in common terms a delta or "child" disk. These reference a parent VHD disk – which we so far have not recovered.

To fully recover healthy VHD images we have two options:

  1. If we have access to the original storage repository from a running XenServer we can use the "xe" command line tools to export each VDI image. This method is preferable as it involves less copy operations and less manual work.
  2. If we have no access from a running XenServer we can copy the disk images and use the "vhd-util" utility to merge files.

Recovery using XenServer

VHD file export using the built-in XenServer tools is relatively straightforward. The "xe vdi-export" tool can be used to export and merge the disk in a single operation. The first step in the process is to map an external storage repository to the XenServer (normally the same repository which is used for upload of the VHD images to CloudStack later on), e.g. an external NFS share.

We now use the vdi-export option as follows:

# xe vdi-export uuid=7046409c-f1c2-49db-ad33-b2ba03a1c257 format=vhd filename=ppvm3root.vhd --progress
[|] ######################################################> (100% ETA 00:00:00)
Total time: 00:01:12
# xe vdi-export uuid=b9a51f4a-3eb4-4d17-a36d-5359333a5d71 format=vhd filename=ppvm3data.vhd --progress
[\] ######################################################> (100% ETA 00:00:00)
Total time: 00:00:00
# ll
total 43788816
-rw------- 1 root root       12800 Nov 18  2015 ppvm3data.vhd
-rw------- 1 root root  1890038784 Nov 18  2015 ppvm3root.vhd

If we now utilise vhd-util to scan the disks we see they are both dynamic disks with no parent:

# vhd-util read -p -n ppvm3root.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Sat Jan  1 00:00:00 2000
Creator Application : 'caml'
Creator version     : Major: 0, Minor: 1
Creator OS          : Unknown!
Original disk size  : 20480 MB (21474836480 Bytes)
Current disk size   : 20480 MB (21474836480 Bytes)
Geometry            : Cyl: 41610, Hds: 16, Sctrs: 63
                    : = 20479 MB (21474754560 Bytes)
Disk type           : Dynamic hard disk
Checksum            : 0xffffefb4|0xffffefb4 (Good!)
UUID                : 4fc66aa3-ad5e-44e6-a4e2-b7e90ae9c192
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 2048
Header version      : 0x00010000
Max BAT size        : 10240
Block size          : 2097152 (2 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0xfffff44d|0xfffff44d (Good!)

# vhd-util read -p -n ppvm3data.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Sat Jan  1 00:00:00 2000
Creator Application : 'caml'
Creator version     : Major: 0, Minor: 1
Creator OS          : Unknown!
Original disk size  : 5120 MB (5368709120 Bytes)
Current disk size   : 5120 MB (5368709120 Bytes)
Geometry            : Cyl: 10402, Hds: 16, Sctrs: 63
                    : = 5119 MB (5368430592 Bytes)
Disk type           : Dynamic hard disk
Checksum            : 0xfffff16b|0xfffff16b (Good!)
UUID                : 03fd60a4-d9d9-44a0-ab5d-3508d0731db7
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 2048
Header version      : 0x00010000
Max BAT size        : 2560
Block size          : 2097152 (2 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0xfffff46b|0xfffff46b (Good!)

# vhd-util scan -f -m'*.vhd' -p
vhd=ppvm3data.vhd capacity=5368709120 size=12800 hidden=0 parent=none
vhd=ppvm3root.vhd capacity=21474836480 size=1890038784 hidden=0 parent=none

These files are now ready for upload to the new CloudStack instance.

Note: using the Xen API it is also in theory possible to download / upload a VDI image straight from XenServer, using the "export_raw_vdi" API call. This can be achieved using a URL like:

https://<account>:<password>@<XenServer IP or hostname>/export_raw_vdi?vdi=<VDI UUID>&format=vhd

At the moment this method unfortunately doesn't download the VHD file as a sparse disk image, hence the VHD image is downloaded at its full original disk size, which makes this a very space-hungry method. It is also a relatively new addition to the Xen API and is marked as experimental. More information can be found at http://xapi-project.github.io/xen-api/snapshots.html.
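As an illustration of how such a download could be scripted (bearing in mind the caveats above), here is a minimal Python sketch using the requests library against the URL form shown; the host, credentials and certificate handling are placeholders and assumptions.

import requests

host = "xenserver.example.com"           # placeholder XenServer host
vdi_uuid = "7046409c-f1c2-49db-ad33-b2ba03a1c257"
url = "https://%s/export_raw_vdi?vdi=%s&format=vhd" % (host, vdi_uuid)

# Stream the (non-sparse) VHD to disk in 1 MB chunks.
# verify=False assumes the default self-signed certificate on the host.
with requests.get(url, auth=("root", "password"), stream=True, verify=False) as r:
    r.raise_for_status()
    with open(vdi_uuid + ".vhd", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)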

Recovery using vhd-util

If all we have access to is the original XenServer storage repository, we can utilise the "vhd-util" binary, which can be downloaded from http://download.cloud.com.s3.amazonaws.com/tools/vhd-util (note this is a slightly different version from the one built into XenServer).

If we run this with the "read" option we can find out more information about what kind of disk this is and if it has a parent. For the root disk this results in the following information:

# vhd-util read -p -n 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Sun Nov 15 23:06:44 2015
Creator Application : 'tap'
Creator version     : Major: 1, Minor: 3
Creator OS          : Unknown!
Original disk size  : 20480 MB (21474836480 Bytes)
Current disk size   : 20480 MB (21474836480 Bytes)
Geometry            : Cyl: 41610, Hds: 16, Sctrs: 63
                    : = 20479 MB (21474754560 Bytes)
Disk type           : Differencing hard disk
Checksum            : 0xffffefe6|0xffffefe6 (Good!)
UUID                : 2a2cb4fb-1945-4bad-9682-6ea059e64598
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 1536
Header version      : 0x00010000
Max BAT size        : 10240
Block size          : 2097152 (2 MB)
Parent name         : cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
Parent UUID         : f6f05652-20fa-4f5f-9784-d41734489b32
Parent timestamp    : Fri Nov 13 12:16:48 2015
Checksum            : 0xffffd82b|0xffffd82b (Good!)

From the above we notice two things about the root disk:

  • Disk type : Differencing hard disk
  • Parent name : cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd

I.e. the VM root disk is a delta disk which relies on parent VHD disk cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd.

If we run this against the data disk the story is slightly different:

# vhd-util read -p -n b9a51f4a-3eb4-4d17-a36d-5359333a5d71.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Mon Nov 16 01:52:51 2015
Creator Application : 'tap'
Creator version     : Major: 1, Minor: 3
Creator OS          : Unknown!
Original disk size  : 5120 MB (5368709120 Bytes)
Current disk size   : 5120 MB (5368709120 Bytes)
Geometry            : Cyl: 10402, Hds: 16, Sctrs: 63
                    : = 5119 MB (5368430592 Bytes)
Disk type           : Dynamic hard disk
Checksum            : 0xfffff158|0xfffff158 (Good!)
UUID                : 7013e511-b839-4504-ba88-269b2c97394e
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 1536
Header version      : 0x00010000
Max BAT size        : 2560
Block size          : 2097152 (2 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0xfffff46d|0xfffff46d (Good!)

In other words the data disk is showing up with:

  • Disk type : Dynamic hard disk
  • Parent name : <blank>

This behaviour is typical for VHD disk chains. The root disk is created from an original template file, hence it has a parent disk, whilst the data disk was just created as a raw storage disk, hence has no parent.

Before moving forward with the recovery it is very important to make copies of both the differencing disks and parent disk to a separate location for further processing. 

The full recovery of the VM instance root disk relies on the differencing disk being coalesced or merged into the parent disk – but since the parent disk was a template disk and is used by a number of differencing disks, the coalesce process will change this parent disk and render any other differencing disks unrecoverable.

Once we have copied the root VHD disk and its parent disk to a separate location, we use the vhd-util "scan" option to verify we have all disks in the disk chain. The "scan" option will show an indented list of disks, which gives a tree-like view of disks and parent disks.

Please note that if the original VM had a number of snapshots, there might be more than two disks in the chain. If so, use the process above to identify all the differencing disks and download them to the same folder.

# vhd-util scan -f -m'*.vhd' -p
vhd=cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd capacity=21474836480 size=1758786048 hidden=1 parent=none
   vhd=7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd capacity=21474836480 size=507038208 hidden=0 parent=cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
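For larger estates this identification step can also be scripted. The sketch below is one way to do it in Python, shelling out to the same vhd-util "read" command shown earlier and printing the parent of every VHD file in a directory so all members of a chain can be gathered in one pass; the storage repository path is a placeholder.

import glob
import os
import subprocess

SR_PATH = "/var/run/sr-mount/<SR-UUID>"   # placeholder: path to the copied SR contents

for vhd in sorted(glob.glob(os.path.join(SR_PATH, "*.vhd"))):
    # Parse the "Parent name" field out of the vhd-util read output.
    out = subprocess.run(["vhd-util", "read", "-p", "-n", vhd],
                         capture_output=True, text=True).stdout
    parent = ""
    for line in out.splitlines():
        if line.startswith("Parent name"):
            parent = line.split(":", 1)[1].strip()
    print("%s -> parent: %s" % (os.path.basename(vhd), parent or "none"))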

Once all differencing disks have been copied we can now use the vhd-util "coalesce" option to merge the child difference disk(s) into the parent disk:

# ls -l
-rw-r--r-- 1 root root  507038208 Nov 17 12:45 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 1758786048 Nov 17 12:47 cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
# vhd-util coalesce -n 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
# ls -l
-rw-r--r-- 1 root root  507038208 Nov 17 12:45 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 1863848448 Nov 17 13:36 cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd

Note the vhd-util coalesce option has no output. Also note the size change of the parent disk cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd.

Now that the root disk has been merged, it can be uploaded as a template to CloudStack to allow the original VM to be rebuilt.

Import of VM into new CloudStack instance

We now have all details for the original VM:

  • Owner account details
  • VM virtual hardware specification
  • Merged root disk
  • Data disk

The import process is now relatively straight forward. For each VM:

  1. Ensure the account is created.
  2. In the context of the account (either via GUI or API):
    1. Import the root disk as a new template.
    2. Import the data disk as a new volume.
  3. Create a new instance from the uploaded template.
  4. Once the new VM instance is online attach the uploaded data disk to the VM.

In a larger CloudStack estate the above process is obviously both time consuming and resource intensive, but it can to a certain degree be automated. As long as the VHD files were healthy to start with, it does, however, allow for successful recovery of XenServer-based VMs between CloudStack instances.
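As a rough sketch of what that automation could look like, the example below drives the standard CloudStack API through the community "cs" Python client for a single VM. The endpoint, keys, zone/OS type/offering IDs, account details and file URLs are all placeholders, parameter names and response shapes should be checked against your CloudStack version, and handling of the asynchronous jobs is omitted.

from cs import CloudStack

# Placeholders: endpoint and keys for the new CloudStack instance.
api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# 1. Register the merged root disk as a template owned by the original account.
tmpl = api.registerTemplate(name="ppvm3-root", displaytext="ppvm3 recovered root disk",
                            url="http://fileserver/ppvm3root.vhd", format="VHD",
                            hypervisor="XenServer", zoneid="ZONE_ID",
                            ostypeid="OSTYPE_ID", account="peterparker",
                            domainid="DOMAIN_ID")

# 2. Upload the data disk as a volume in the same account.
vol = api.uploadVolume(name="ppdatavol", url="http://fileserver/ppvm3data.vhd",
                       format="VHD", zoneid="ZONE_ID",
                       account="peterparker", domainid="DOMAIN_ID")

# 3. Deploy a new instance from the registered template, then attach the volume.
#    uploadVolume and deployVirtualMachine are asynchronous, so in practice poll
#    queryAsyncJobResult (or check the UI) before each dependent step.
vm = api.deployVirtualMachine(serviceofferingid="OFFERING_ID",
                              templateid=tmpl["template"][0]["id"],
                              zoneid="ZONE_ID", name="ppvm3",
                              account="peterparker", domainid="DOMAIN_ID")
api.attachVolume(id=vol["id"], virtualmachineid=vm["id"])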

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing and implementing IaaS solutions based on Apache CloudStack.

 


----

Shared via my feedly reader


Sent from my iPhone

How We Built Delivery With Delivery [feedly]



----
How We Built Delivery With Delivery
// Chef Software

In February of 2014 Chef formed an engineering team to build what became Chef Delivery--a system that accelerates the adoption of continuous delivery (CD) and DevOps practices. In this webinar, Seth Falcon, General Manager of Continuous Delivery at Chef, will tell the story of how that team changed as Chef Delivery evolved from an early prototype to the product launched on November 3rd. We will describe how we learned to practice CD while building a CD system, and talk about the impact that working with Chef Delivery has had on us.
----

Shared via my feedly reader


Sent from my iPhone

Chef Delivery: A Guided Tour [feedly]



----
Chef Delivery: A Guided Tour
// Chef Blog

Chef Delivery accelerates the adoption of continuous delivery and encourages DevOps collaboration. It provides a proven, reproducible workflow for managing changes as they progress from a developer's workstation, through a series of automated tests, and out into production.

In this recorded webinar (presented on November 17, 2015), Nathen Harvey and Michael Ducy introduce you to Chef Delivery. They show you how to submit changes to Delivery and how to use the UI to track them as they move through the different stages of the pipeline. They talk a bit about the kinds of tests you can run, show you our GitHub integration, and give you an overview of how to control what happens in the pipeline with build cookbooks.

Q&A from the live webinar, including questions we didn't have time to answer live, can be viewed below.

Q&A From Live Webinar

Does Chef Delivery integrate with SVN?

No, we do not have integration with SVN planned at this time. We are focused on providing an excellent experience for GitHub and Stash/Bitbucket users first, then will evaluate the possibility of integrating with other SCM systems.

If one attribute value of a policy (or role) is changed, do you still recommend Approve / Deliver phase?

Yes. All code changes should flow through the same workflow. Even what seems like the simplest of changes can lead to an outage.

Can you name your Delivery status to something that works well for your environment like (sandbox–>dev–>stage–>production)?

The stages in Delivery cannot be renamed, and may in fact be orthogonal to your existing development and testing environments. A useful way to think about this is that the Acceptance stage should provide you with the confidence you need to ship a change all the way through to production, or whatever your Delivered target may be.

If we already have a Chef Premium subscription, where do we download Chef Delivery?

Please contact your Chef account representative for instructions.

Will hosted Chef include Delivery? Or will Delivery always be for on premises Chef?

There are no plans to add Delivery to Hosted Chef at this time, but we will watch for demand and revisit this decision periodically.

Can you provision or deploy to VMware like you would to Amazon?

Absolutely. One of our customers did this using vRealize during their initial evaluation of Delivery.

I noticed that rehearsal started right after union successfully completed. Can you insert a manual approval step in between?

No, the approval steps are not configurable. The Rehearsal stage exists solely as a way to validate that fixes made in response to breaks in Union are valid.

You said it will be available for Stash, but does that mean Git only? Or will Mercurial be supported?

The Stash/Bitbucket integration will be for the Git workflow (i.e. PR-based).

Is Delivery designed to replace other CI/CD platforms like Bamboo or TeamCity?

Delivery is designed to reinforce the proven workflow that we know to be successful for many of our customers. It reflects the tooling and workflow practices that we believe lead to great outcomes and allow you to move safely at high velocity.

Can Chef Delivery be implemented in-house?

Yes, it is currently available only for on-premises installation. There is information about how to set up Delivery at https://docs.chef.io/delivery.html

Do you have a timeline when it will work with Stash? Will it technically work with any git repo even if it is not Github?

The Stash/Bitbucket integration is planned for completion in Q4 2015.

It's not clear to me whether the merge activity requires human intervention in this pipeline. How do the tool's automation and the human decisions/reviews interact?

Humans interact with the system in three places. First, a human will create and submit a change into the pipeline. Second, another human will review that code with an eye toward whether or not the code was written correctly (was it built correctly?). Third, a human will decide if what was built is ready to deliver (was the correct thing built?).

Which stage/task runs kitchen tests in your example?

Test Kitchen was not executed during our demo but should be run on the local development workstation and may also be run during one of the phases.

Can you do a demo that shows what happens if something breaks somewhere in the process, how to fix it, and how to rerun? Does it rerun end-to-end again or just continue from where it failed?

Each change that moves through the pipeline goes through the complete set of stages and phases, so the process does not resume where the break occurred but starts from the beginning again.

Can a single application infrastructure item be promoted to different provisioners (e.g., AWS and vCenter)?

Yes. You can drive any automated provisioning technology from within a pipeline. As such, it is possible to provision infrastructure in different clouds using different APIs all within a single provision phase of the pipeline.
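As a rough illustration only (the driver URLs, machine names, and recipe are hypothetical, and this assumes the chef-provisioning AWS and vSphere drivers are available on the build nodes), a provision-phase recipe might drive both providers like this:

# A sketch of a provision-phase recipe that targets two providers in one phase.
require 'chef/provisioning'

# Provision an instance in AWS (assumes AWS credentials are configured on the build node).
with_driver 'aws::us-east-1' do
  machine 'myapp-aws' do
    recipe 'myapp'
  end
end

# Provision a VM through vCenter using the chef-provisioning-vsphere driver.
with_driver 'vsphere://vcenter.example.com' do
  machine 'myapp-vsphere' do
    recipe 'myapp'
  end
end

The key point is that the phase is just Chef code, so it can call whichever provisioning APIs you already use.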

What does the pricing structure look like for Chef Delivery? Is this a flat fee, monthly, based on application quantity, etc?

See https://www.chef.io/pricing/ for standard pricing of the Chef Premium subscription, which includes Delivery. Delivery can also be purchased as a stand-alone product with licenses per server and per user; please contact your Chef account rep for details.

Is it integrated with Stash?

Stash/Bitbucket integration work is now in progress and is planned for completion in Q4 2015.

Are all the tests handled by Delivery as well?

The recipes you write in the project's Build Cookbook dictate what happens in each Phase. For example, if you are using Cucumber for executing automated functional tests, you can tell Delivery to launch Cucumber and run the tests in the Functional Phase of the pipeline.
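For example (a minimal sketch only; the workspace attribute and command shown are assumptions, not a prescribed layout), the Functional phase recipe of a build cookbook might look like:

# functional.rb in the project's build cookbook.
# Delivery runs this recipe during the Functional phase of the pipeline.
execute 'run cucumber functional tests' do
  command 'bundle exec cucumber --format progress'
  # Assumes the Delivery workspace exposes the checked-out project at this attribute.
  cwd node['delivery']['workspace']['repo']
end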

Is there an open source freebie version of Chef Delivery to play with?

There are two open source components (delivery-cli and delivery-truck) available for use on GitHub. https://github.com/chef/delivery-cli and https://github.com/chef-cookbooks/delivery-truck

Is it integrated with GitHub Enterprise yet?

Yes, this integration is complete.

What are my integration options between Delivery and Jenkins?

You can trigger Jenkins jobs via a Delivery pipeline from a phase job.
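For instance (a sketch, not a built-in integration; the Jenkins URL, job name, and token are placeholders), a phase recipe could call Jenkins' remote build-trigger endpoint:

# Kick off an existing Jenkins job from a Delivery phase recipe using the
# "trigger builds remotely" endpoint that Jenkins jobs can expose.
http_request 'trigger downstream Jenkins job' do
  url 'https://jenkins.example.com/job/deploy-myapp/build?token=PLACEHOLDER_TOKEN'
  action :post
end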

Can applications be delivered without being tied to a cookbook?

Yes. All projects have a build cookbook that tells Delivery what to do during each phase. The projects themselves do not need to be cookbook-based, nor does deployment need to be controlled via cookbooks.

Is active directory integration available?

Delivery supports basic authentication via AD user names/passwords. We are looking to do more work on LDAP integration in the coming quarters.

Does Delivery integrate well with both on-premises (VM and physical) and cloud technologies (AWS, Azure, Google)?

Delivery itself is an on-premises product that you install and manage; it can be hosted in the cloud, in your datacenter, or a mix of both (it is common to have build nodes spread across clouds).

You mention that the approval and deliver phases are manual (human) approval phases. Is it required that these be manual, or can they be automated based on metrics such as the percentage of tests passed?

These approval gates are currently manual, but interest has been expressed by some customers in defining business rules to automate these steps. We will evaluate whether a product change in this area makes sense.

Why have the build phase after someone reviews the change? Would you not want to get feedback from the build before reviewing?

The Build stage happens after approval (review). Approval triggers the system to merge the change to the target branch and begin the Build stage. The purpose of this ordering is to drive continuous integration, increase pipeline velocity, and ensure that the resources spent on QA for a build are not wasted on an artifact that could not be released or is not desired (that is, not approved).


----

Shared via my feedly reader


Sent from my iPhone

Chef Server 12.3.1 Release Announcement [feedly]



----
Chef Server 12.3.1 Release Announcement
// Chef Blog

Ohai Chefs,

We're pleased to announce that today we've released Chef Server 12.3.1. This is a small patch release that addresses severely degraded performance for nodes fetching cookbooks when all cookbooks have the same version number. Additionally, Chef Server now applies the full set of API-level tests to nightly builds of Chef Zero to ensure that the two APIs stay in sync.

The release can be downloaded at https://downloads.chef.io/chef-server.


----

Shared via my feedly reader


Sent from my iPhone

Friday, November 20, 2015

Static Analysis: Improving the quality and consistency of your cookbooks [feedly]

Static Analysis: Improving the quality and consistency of your cookbooks
https://www.chef.io/blog/2015/11/20/static-analysis-improving-the-quality-and-consistency-of-your-cookbooks/

-- via my feedly.com reader 

Every time we make changes to our cookbooks we are introducing risk. We can stop making changes to reduce the risk OR we can adopt new practices, like linting and testing, to help us manage that risk.

Linting tools provide automated ways to check that the code we write follows conventions for uniformity, portability, and best practices. This ensures everyone on the team writes similarly structured source code, weaves those expectations into day-to-day development, and encourages collaboration over time. Uniform source code also sets clear expectations for fellow project contributors.

In this recorded webinar (presented on November 4, 2015) we focus on using Foodcritic and Rubocop, two linting tools packaged in the Chef Development Kit (ChefDK) that you can immediately start using to reduce the risk in the cookbooks you develop. Q&A from the live webinar, including questions we didn't have time to answer live, can be viewed below.

  • Introduction to Foodcritic (@ 10:00 in the recording)
  • Introduction to Rubocop (@ 21:45 in the recording)
  • Demonstration of the Tools (@ 30:45 in the recording)

 

Is it easy to integrate these code analysis tools into Jenkins for pass/fail during the delivery pipeline?

It is definitely easy to integrate Foodcritic and Rubocop with Jenkins or any other continuous integration (CI) tool within your organization. It is also strongly encouraged. In the past I would simply add a new build step, executed after the latest version of the code had been synchronized, to run both Foodcritic and Rubocop from the command line just as I would run them locally.

The benefit of running these tools on a central system is that it ensures the code everyone on the team writes adheres to the policy that all of you define. Without this central system, the tools may not be run by each individual team member, or the results may vary from machine to machine.
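One lightweight way to wire this up (a sketch, assuming the foodcritic and rubocop gems from the ChefDK are available on the CI node and the build step simply runs rake lint) is a Rakefile in the cookbook repository:

# Rakefile
require 'foodcritic'
require 'rubocop/rake_task'

# Fail the build on any Foodcritic finding; tune fail_tags to match your policy.
FoodCritic::Rake::LintTask.new do |t|
  t.options = { fail_tags: ['any'] }
end

# RuboCop reads .rubocop.yml from the repository automatically.
RuboCop::RakeTask.new

task lint: [:foodcritic, :rubocop]
task default: :lint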

We currently treat both Foodcritic and Rubocop lint cops as warnings, and we plan to start failing the build on some of them soon while excluding the legacy code we would probably not change. What do you think is the best way to exclude them? A .rubocop.yml file and a .foodcritic file inside the cookbook?

Yes. Each cookbook should have both of these files configured with the rules that you want enabled and the rules that you want disabled.

Should / Can you use Foodcritic / Rubocop to lint data bags?

Data Bags are stored as JSON on the local file system. When you attempt to use knife to upload those data bags to the Chef Server, the knife tool will perform some validation on that data for correctness.

However, you may not immediately be using knife to upload the updated data bag content to the Chef Server. Your workflow may have you 1) make the changes to the JSON, 2) commit those changes to a source code repository, 3) push those changes to a central code repository, and 4) allow a Continuous Integration (CI) environment to read in those changes and upload them to the Chef Server. In this environment it would be incredibly useful to have a tool that validates the JSON on your local machine before submitting to the CI system. Here are a few tools I quickly found that can validate JSON from the command line:

  • https://github.com/zaach/jsonlint
  • https://stedolan.github.io/jq

Remember, it is also possible to use Ruby or any other programming language that has JSON support. Write a script that loads all the JSON files to ensure that they simply load correctly. Here is an example of doing that in Ruby.
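A minimal sketch of such a script (the data_bags/ directory layout is an assumption; adjust the glob to match your repository):

# validate_data_bags.rb -- fail if any data bag item is not parseable JSON.
require 'json'

failures = Dir.glob('data_bags/**/*.json').reject do |path|
  begin
    JSON.parse(File.read(path))
    true
  rescue JSON::ParserError => e
    puts "#{path}: #{e.message}"
    false
  end
end

if failures.empty?
  puts 'All data bag JSON files parsed successfully.'
else
  abort "#{failures.size} data bag file(s) failed to parse."
end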

Have you used the Atom Text Editor linter packages for Foodcritic and Rubocop? Do you find them useful?

I recently made the change over to using Atom as my full-time editor, and these linter packages were one of the main reasons. Atom supports plugins for both of these tools.

  • https://atom.io/packages/rubocop-auto-correct
  • https://atom.io/packages/linter-foodcritic

In the past, when doing some serious development, I would have both of these tools running in the background, monitoring files on every save, using a Ruby tool named Guard.

  • https://github.com/guard/guard-rubocop
  • https://github.com/guard/guard-shell

Similar to the Atom Text Editor lint packages questions, have you used RubyMine with chef and chef linting?

I am a big fan of the JetBrains tools. In the past I have used ReSharper to accelerate my workflow when working with C#. RubyMine is the most sophisticated Integrated Development Environment (IDE) for Ruby, with some incredible features.

Does Chef conduct training that's based on Test Kitchen and these linting tools?

In the new Chef Essentials training we demonstrate the use of Test Kitchen.

What is your process to get the .rubocop rules that everyone agrees on out to the multiple cookbooks?

My usual process involves defining that first .rubocop.yml file against one cookbook. I try to target the biggest cookbook with the most recipes, helpers, tests, etc. I execute Rubocop against each file, fix the issues the team agrees with, and then add the rules I want to disable to the configuration file.

How I distribute the completed file to each cookbook depends on if I am using a single chef repository for all my cookbooks or if I am using a repository per cookbook.

With a single, all-encompassing repository you can store the .rubocop.yml file in the root of the Chef repository. Rubocop will look for a configuration file in the current directory and then move up through parent directories until it finds one.

For individual repositories per cookbook you could do the same: store the file in a parent directory and distribute it to each team member through a separate code repository. More often, though, I simply add this file to every cookbook that we maintain.

One way to avoid copying this configuration file every single time you create a cookbook is to use the generator tools in the Chef Development Kit (ChefDK). You can set up a template to automatically generate a cookbook for you that contains all of the common configuration files you desire.
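For example (a sketch; treat the exact setting name and path as assumptions drawn from the ChefDK generator documentation), you can point the chef generate commands at your own generator cookbook from config.rb so that every newly generated cookbook includes your shared lint configuration:

# ~/.chef/config.rb
# Point ChefDK's generators at a custom generator cookbook so that every
# `chef generate cookbook` run copies in your shared .rubocop.yml and .foodcritic files.
chefdk.generator_cookbook File.expand_path('~/chef/my_generator_cookbook')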

Does Foodcritic support things like enforcing particular copyright text blurbs or other company-specific conventions? Or is there a better tool to investigate for that kind of functionality?

Foodcritic does not by default. However, Foodcritic does provide the ability for you to write your own rules. A number of examples are available on the Foodcritic website.

Rubocop provides a Domain Specific Language (DSL) that allows you to define a rule, give it a code, a name, tags, what files it examines, and what it does when it examines each line of code. It does require some Ruby programming skills.
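As a rough sketch of what a custom Foodcritic rule for this could look like (the rule code, tag names, and header text are hypothetical; see the Foodcritic documentation for the full rule API), a rule that flags recipes missing a copyright header might be:

# rules/company_rules.rb -- load with something like `foodcritic -I rules <cookbook_path>`.
rule 'EX001', 'Recipe is missing the company copyright header' do
  tags %w{style company}
  recipe do |_ast, filename|
    header = File.readlines(filename).first(5).join
    # file_match flags the file at line 1, as several built-in rules do.
    header =~ /Copyright/i ? [] : [file_match(filename)]
  end
end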

What was the terminal prompt used during the Webinar?

On my Mac I use iTerm2 with oh-my-zsh, the Agnoster theme, and Powerline-patched fonts.

  • https://www.iterm2.com
  • https://github.com/robbyrussell/oh-my-zsh
  • https://gist.github.com/agnoster/3712874
  • https://github.com/powerline/fonts
  


Chef Analytics 1.2.0 Release [feedly]

Chef Analytics 1.2.0 Release
https://www.chef.io/blog/2015/11/19/chef-analytics-1-2-0-release/

-- via my feedly.com reader

Ohai Chefs,

We are pleased to announce that Chef Analytics 1.2.0 is now available and features not only a new and improved look, feel, and user experience, but also improved node management abilities.

The biggest changes in this release are the new Node Detail page and the ability to purge node data. In the new Node Detail page, you can now explore your node run history, filter nodes based on status, and more accurately search your node list.

In addition, now you have more control over your disk space in Postgres with our new command to destructively remove node ohai data.

As always, you can download new releases of Analytics from downloads.chef.io.

Thank you for using Chef!


Guest Post: Chef Extension for Visual Studio Code [feedly]

Guest Post: Chef Extension for Visual Studio Code
https://www.chef.io/blog/2015/11/19/guest-post-chef-extension-for-visual-studio-code/

-- via my feedly.com reader 

Stuart Preston is an incredibly valued member of the Chef community and has been a key contributor in making an awesome experience using Chef along with the Microsoft product line. Stuart's most recent addition to the Chef ecosystem is an extension for Visual Studio Code for developing your Chef code. The extension adds the ability to continually lint your code using Rubocop and autocomplete many of the resources found in core Chef.

Note: This post originally appeared on stuartpreston.net

Announcing: Chef Extension for Visual Studio Code

At Microsoft's Connect() conference today, it was announced that Visual Studio Code now supports extensions and has been open-sourced!

Over the past couple of weeks I have been working hard behind the scenes to build a Chef Extension for Visual Studio Code to make cookbook development an awesome experience on Windows, OS X and Linux.

The extension currently supports:

Syntax/keyword highlighting – Visual Studio Code will now recognise and colourise Chef DSL (including Chef Provisioning DSL). You can select from a number of different colour themes also available on the Marketplace.

Rubocop linting – The quickest way to identify issues in your code is to use a linting tool like Rubocop to perform static code analysis as you go. The Chef extension is preconfigured to automatically run Rubocop against your whole repo every time you save.

Snippet support – The extension includes autocomplete code snippets for all resources built into the Chef Client.

Here's a demo of some of the new capabilities:

If you are working with Chef and you haven't used Visual Studio Code before, I recommend you try it out. Downloads are available for Windows, OS X and Linux from https://code.visualstudio.com. Once you have installed it you can download the Chef Extension directly from within the application's Command Palette.

At Pendrica we are committed to open source and to making things more awesome for the Chef community. We believe that infrastructure developers using Chef should have the same rich feature set and extended user experience as our counterparts in application development. We hope you enjoy using the extension, which is open-sourced on our GitHub page at https://github.com/pendrica/vscode-chef

Happy Cheffing!