Sunday, June 3, 2018

----
Xen Orchestra 5.20
// Xen Orchestra

Our monthly release is here, with a bunch of new features and bug fixes. Time to update and discover what's new!

XCP-ng updates

Xen Orchestra is now able to update your XCP-ng hosts directly from the UI! This is a major improvement in XCP-ng usability that doesn't sacrifice our promise to stay close to upstream: we still rely on yum, like any other CentOS system, but you can now also run updates from XO!


Read our blog post to learn how it works.

UI improvements

Better usage reports

Usage reports now filter out irrelevant objects and display the evolution of each resource (host CPU, RAM, etc.) more clearly.


List useless VM snapshots

When you remove a backup job, the associated snapshots aren't removed with it. To spare you a long hunt for those orphaned snapshots, we now detect them automatically in the "Health" view, so you can remove them in one click!

HA advanced options

You can now configure the XCP-ng/XenServer HA behavior for each VM: "restart", "restart if possible" and "disable". For more details, please read the documentation on HA.

Show control domain in VDI list

We now display the Control Domain if a VDI is attached to it. This is helpful to understand what's happening on your storage.

Set a remote syslog host

You can now set a remote syslog host directly from the UI:

[screenshot]

This feature is really useful when you want to centralize all your hosts' logs somewhere, for various reasons:

  • compliance
  • security
  • log analysis/parsing
  • and more!

Backup

Backup concurrency

A new option is available: you can now cap how many VMs a backup job processes in parallel. For example, you could decide to back up a maximum of 10 VMs at a time. By default, there is no limit ("0" in the text field), BUT we still enforce maximum values. Please read our blog post regarding backup concurrency in Xen Orchestra.


Improved backup logs

A lot more detail in the backup logs! Each step is now individually visible, with its duration, current status, etc.


You can now see in real time which steps failed (snapshot, transfer, merge) and how long each successful step took.

Improved backup reports

Same story for backup reports; they're far more detailed:

[screenshot]

Retry a single failed VM backup

In a job log view, if a VM failed to back up, you can now retry just that VM. Very handy when the failure had a specific cause, like a protected VDI chain, or when a single VM fails among hundreds that succeeded: no need to restart the whole job!


----

XCP-ng updates from Xen Orchestra
// Xen Orchestra

You may already know: we decided to have a better update mechanism in XCP-ng (vs XenServer). We are using the Red Hat way, with yum.

But it would also be nice to update directly from a UI, and to avoid updating each host one by one.

Guess what? That's why we developed a XAPI (XenServer/XCP-ng API) plugin! You'll see that it's now very easy to keep all your hosts up to date, with more transparency about what's going on!

This feature is coming in XO 5.20, available in two days!

Install the plugin

The plugin will be included in the next XCP-ng release. Until then, you'll have to install it with a simple yum install xcp-ng-updater command. Yes, that's it!
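
In practice, the whole installation is a single command, run in dom0 on each host (a sketch of exactly what's described above):

# run in dom0 on each XCP-ng host you want to update from XO
yum install xcp-ng-updater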

Updates via Xen Orchestra

Here, I'm using a freshly installed XCP-ng 7.4.1 as an example. Since our release, some patches have become available.

First, just go into the Home/Pool view in Xen Orchestra, and you'll see that some pools need updates:

[screenshot]

"XCP Pool" needs to be updated (it's XCP-ng based, not XenServer): in this case, six packages are pending. If you click on the pool, you can go into its dedicated view and the "Patches" tab:

[screenshot]

By clicking on "Install pool patches", all your hosts will be updated automatically! But wait, there's more. Want to know what needs to be updated? Click on a host and go into its "Patches" view:

[screenshot]

And you can even see the changelog (here, the only changelog available is for microcode_ctl):

[screenshot]

Anyway, as soon as your patches/updates are installed, we'll show that the updated hosts need to be rebooted:

[screenshot]

Hovering over it:

[screenshot]

Same in the host view; you can't miss it:

[screenshot]

Note: it's up to you to reboot whenever you like (always starting with the pool master, by the way). Updating won't reboot anything automatically. You are the admin; you know the best time to do so.

If you don't have the plugin installed, you'll see a message explaining how to do so:

[screenshot]

In the end, updating XCP-ng is very simple, and it still follows the "upstream" way: using yum and a repository. This is proof that it's possible!


----

XenServer 7.5
// Xen Orchestra

One release per quarter: that's the Citrix CR (Current Release) cycle. And this is the latest one available: XenServer 7.5!

What's new?

The official changelog is here, but here's a quick recap.

Increased pool size

You can now use pools of up to 64 hosts! That's really good news: it means you can use XOSAN for the same price, with a LOT more hosts/storage!

USB passthrough

You can now pass a physical USB device through to a VM, as if it were plugged directly into it. This raises some security questions, but it will probably be handy for various use cases.

Note: this feature is only available in XenServer Enterprise Edition, or you can wait for the upcoming free XCP-ng 7.5 release!

Networking SR-IOV

This is still experimental, but it should enable ultra-fast networking by connecting the VM directly to the NIC, with less overhead too.

To enable it: xe-enable-experimental-feature network_sriov

Note: this feature is only available in XenServer Enterprise Edition, or you can wait for the upcoming free XCP-ng 7.5 release!

Thin provisioning for block storage

This is really exciting! This feature has probably been the big missing piece for years… Good news: you can now have thin provisioning on iSCSI/HBA, using GFS2 technology.

Bad news: you need to create a new SR (you can't "convert" an existing iSCSI SR to thin provisioning), and you can't migrate a VDI from this SR to another SR type (see the "Bonus" section).

Note: this feature is only available in XenServer Enterprise Edition, or you can wait for the upcoming free XCP-ng 7.5 release!

Bonus: qcow2 is here

For the new thin-provisioned block storage (GFS2), you can create VDIs as large as 16 TiB, thanks to the qcow2 format. This raises some questions: you won't be able to export those disks in VHD format anymore (the 2 TiB limit is built into the VHD format itself), so you won't be able to migrate 2 TiB+ disks to any SR type other than GFS2.

XCP-ng 7.5 status

As soon as we've validated the update process, we'll provide a 7.5 release for XCP-ng, as usual with two choices: upgrade from the ISO, or just use yum update (or the XO UI; see how we included updates directly from Xen Orchestra!).
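
In other words, once 7.5 lands in the repositories, the command-line path should look roughly like this (a sketch based on the description above, not the official upgrade guide):

# on each host, starting with the pool master
yum update
# then reboot the host whenever it suits you
reboot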

Stay tuned to the XCP-ng website, in the news section (or subscribe to the newsletter from there)!


----

Introducing the Netgate Forum
// Netgate Blog

The pfSense forum has been migrated to a combined forum for all Netgate products at https://forum.netgate.com. This forum is powered by NodeBB.


----

How to deploy templates without using secondary storage on KVM
// CloudStack Consultancy & CloudStack...

Introduction

ShapeBlue will introduce a new feature in CloudStack 4.11.1 that allows users to bypass secondary storage with KVM. The feature introduces a new way to use templates and ISOs, allowing administrators to use them without caching them on secondary storage. The usual virtual machine deployment process stays the same; the only difference is that downloading the template or ISO is delegated to the KVM agents instead of the SSVM agent.

Overview

This feature adds a new field called 'direct_download' to the vm_template table. The field determines whether a template is downloaded by the SSVM ('0') or directly on the host when deploying the VM ('1'). CloudStack administrators can set this field through the UI or an API call, as described in the following examples:

From the UI: [screenshot]

From Cloudmonkey:

register template zoneid=3e80c1e6-0710-4018-9062-194d6b3bab97 ostypeid=6f232c75-5370-11e8-afb9-06354a01076f hypervisor=KVM url=http://dl.openvm.eu/cloudstack/macchinina/x86_64/macchinina-kvm.qcow2.bz2 format=QCOW2 displaytext=TestMachina name=TestMachina directdownload=true

The same feature applies to ISOs as well – they don't need to be cached on secondary storage but can be directly downloaded by the host. CloudStack admins have this option available on the API call when registering ISOs and through the UI form as well.

Whenever a VM deployment starts, the template is downloaded to primary storage. The feature first checks whether the template/ISO has already been downloaded to the pool by consulting the template_spool_ref table. If there is an entry in that table matching the pool ID and the template ID, it won't be downloaded again. The same applies if a running VM requires the template again (e.g. when reinstalling). Please note that, due to the direct-download nature of this feature, the uniqueness of templates across primary storage pools is the responsibility of the CloudStack operator: CloudStack itself can't detect whether the files behind a template's download URL have changed.
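
If you want to check this yourself, the seeding state is visible in that table on the management server. This is an illustrative query only; verify the column names against your CloudStack schema:

# hypothetical look at the 'cloud' database on the management server
mysql -u cloud -p cloud -e \
  "SELECT pool_id, template_id, download_state \
   FROM template_spool_ref \
   WHERE pool_id = <POOL_ID> AND template_id = <TEMPLATE_ID>;"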

Metalinks are also supported, and they let administrators be more flexible in managing their templates, since priorities and location preferences can be set in the metalink file. Metalinks are effectively XML files that provide URLs for downloading files. The duplicate download locations provide reliability in case one source fails, and some clients can achieve faster download speeds by fetching different chunks/segments of each file from multiple sources at the same time. Please see the following example:
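
The example image did not survive this digest, but a minimal metalink along the lines described might look like this (hypothetical URLs; check the CloudStack documentation for the exact schema it expects):

cat > macchinina.meta4 <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<metalink xmlns="urn:ietf:params:xml:ns:metalink">
  <file name="macchinina-kvm.qcow2.bz2">
    <!-- lower priority value = preferred source; location sets the preference -->
    <url location="us" priority="1">http://mirror-us.example.com/macchinina-kvm.qcow2.bz2</url>
    <url location="eu" priority="2">http://mirror-eu.example.com/macchinina-kvm.qcow2.bz2</url>
  </file>
</metalink>
EOF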

As the example shows, CloudStack administrators can set a location preference and a priority, both of which are considered during VM deployment. The deployment logic itself introduces a retry mechanism for two failure cases: VM deployment failure and template download failure.

VM deployment retry logic: this initiates the deployment on a suitable host and tries to deploy the VM (which includes the template download itself). If the deployment fails for some reason, it retries on another suitable host.

Template download retry logic: this is part of the VM deployment and tries to download the template/ISO directly on the host. If the download fails for some reason (e.g. a URL is not available), it iterates through the provided list of priorities and locations. Once a download completes, it runs the checksum validation (if one was provided); if that fails, it downloads again, up to three attempts in total. If all three attempts are unsuccessful, it returns a deployment failure and falls back to the VM deployment retry logic.

Please see the following simplified picture of the deployment logic:

Since the download task is delegated to the KVM agent instead of the SSVM, this feature is only available for KVM templates.

About the author

Boris Stoyanov is a Software Engineer in testing at ShapeBlue, The Cloud Specialists. Bobby spends his time testing features for the Apache CloudStack community and for our ShapeBlue clients.



----

Working towards CloudStack zero downtime upgrades
// CloudStack Consultancy & CloudStack...

As most people know, Apache CloudStack has gained a reputation as a solid, low maintenance dependable cloud orchestration platform. That's why in last year's Gartner Magic Quadrant so many leaders and challengers were organisations underpinning their services with Apache CloudStack. However, version upgrades – whilst being much simpler than many competing technologies – have always been the pain point for CloudStack operators. The irony is that upgrading CloudStack itself is usually relatively painless, but upgrading its distributed networking Virtual Routers often results in network downtime for users for a number of minutes, requiring user maintenance windows.

At ShapeBlue we have a vision that CloudStack based clouds – whatever their size and complexity – should be able to be upgraded with zero downtime. No maintenance windows, no service interruptions: zero downtime. Achieving this will allow all CloudStack users/operators to benefit from the vast array of new functionality being added by the CloudStack community in every release.

We set out on the journey towards zero downtime a number of months ago and have been working hard with the CloudStack community on the first steps (it is important to note that "we" includes many people in the CloudStack community who have contributed to this work). Below, I set out the detail of what we've achieved so far and what we hope to be achieving in the future, but if readers just want the headline: CloudStack 4.11.1 has up to an 80%+ reduction in network downtime during upgrades compared to CloudStack 4.9.3, and downtime is near eliminated when using redundant VRs.

What's the problem when upgrading?

During upgrades, CloudStack's virtual routers (VRs) have to be restarted and usually destroyed and recreated (this also sometimes has to be done during day-to-day operations, but is most apparent during upgrades). These restarts usually lead to downtime for users – in some cases up to several minutes. Whilst Redundant Virtual Routers can mitigate against this they do have some limitations with regards to backward compatibility and are therefore not always a solution to the problem.

Downtime reductions in CloudStack 4.11.1

With the changes made in 4.11.1 (described below) we have managed to achieve significant reductions in network downtime during VR restarts. Please note these improvements will vary from one environment to another, and will be dependent on hypervisor type, hypervisor version, storage backend as well as network bandwidth, so we suggest testing in your own environment to determine the benefits. We've tested with a typical VR configuration in our virtualised lab environment.

The testing setup used is as follows:

  • CloudStack 4.9.3 and 4.11.1 environments built in parallel. To maintain the same hypervisor versions across both tests the following hypervisor versions were used:
    • VMware vSphere 5.5
    • KVM on CentOS7
    • XenServer 7.0
  • Environment configuration: In each test we build a simple isolated network with:
    • 10 VMs
    • 10 IP addresses
    • Firewall rules configured on all IP addresses

Downtime was measured as follows:

  • For egress traffic we measured the total amount of time an outbound ping would fail from a hosted VM during the restart process (a minimal version of this measurement loop is sketched after this list).
  • For ingress traffic we assumed a hosted service on a CloudStack VM and measured the amount of time SSH was unavailable during the restart process.
  • In all tests we carried out a "restart network with cleanup" operation and measured the above times. Note: with the new parallel VR restart process (see below) we no longer care how long the overall process takes; we are only interested in how long the network is impacted. As a result we simply measured the total time services were unavailable (which may in some cases be a sum of multiple downtime periods).
  • Tests were repeated multiple times, and the average ingress and egress downtime in seconds was calculated across tests for each hypervisor. To illustrate our best-case scenarios we've also included the shortest measured downtime figure.
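
For reference, the egress measurement can be approximated with a small shell loop like this (our sketch, not the exact harness used in the tests):

#!/bin/sh
# log one line for every second the target is unreachable;
# total downtime ~= number of lines in downtime.log
TARGET=8.8.8.8   # any address reached via the Virtual Router
while true; do
  ping -c1 -W1 "$TARGET" >/dev/null 2>&1 || date +%s
  sleep 1
done | tee downtime.log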

Results are as follows:

Environment     | ACS 4.9.3 avg | ACS 4.11.1 avg (lowest) | Reduction avg (highest)
VMware 5.5      | 119s          | 21s (12s)               | 82% (90%)
KVM / CentOS 7  | 44s           | 26s (9s)                | 40% (80%)
XenServer 7.0   | 181s          | 33s (15s)               | 82% (92%)

How these results were achieved

Existing improvements made in CloudStack 4.11

A number of changes were made in CloudStack 4.11 designed to improve VR restart performance:

  • The system VM has been upgraded from Debian 7 (init based) to Debian 9 (systemd based)
  • The patching process and boot times have been improved, and we have also eliminated reboots after patching
  • The system VM disk size has been reduced, leading to faster deployment time.
  • The VPN backend in the VR has been upgraded to Strongswan, which provides improved VPN performance
  • The redundant VR (RVR) mechanisms have been improved.
  • Code base has been refactored, and it is now easier to build and maintain VR code
  • A number of stability improvements made

Changes in CloudStack 4.11.1 – Parallel VR restarts

CloudStack 4.11.1 will ship with a new feature, Parallel VR Restarts, which changes the behaviour of the "restart network with cleanup" option. In previous CloudStack versions this was a serial action: the original VR was stopped and destroyed, and only then was a new VR started. In CloudStack 4.11.1 this has been changed to a parallel process, where a "restart with cleanup" means:

  • A new VR is started in the background while the old one is still running and providing networking services.
  • Once the new VR is up and has checked in to CloudStack management the old VR is simply stopped and destroyed.
  • This is followed by a last configuration step where ARP caches at neighbours are updated.

With this method there is no negotiation between the old and new VR; CloudStack simply orchestrates the parallel startup of the new VR. As a result, this method has no prerequisites regarding the version of the original VR, meaning it can be used for VR restarts after upgrades from considerably older CloudStack versions to 4.11.1.

It is worth noting that this 4.11.1 feature does not greatly reduce the actual VR processing time itself; however, with the parallel startup that time no longer affects network downtime, and the remaining downtime is mostly tied to the final handover of network processing from the old VR to the new one.

In addition to the considerable reduction in normal VR restart downtime, this feature also introduces a much improved redundant VR restart. This comes close to eliminating network downtime when redundant VR networks are restarted, though it obviously requires the old and new VRs to be version compatible. In our own testing, downtime for redundant VR networks was nearly eliminated.

Coming in future versions

Advanced parallel restarts

The next step on the journey is to add further handshaking between old and new VR:

  • New VR will be started in parallel to old, but with some network services and / or network interfaces disabled.
  • Once new VR is up CloudStack management will do an external handover from old VR to new, i.e. handle VR connectivity via the hypervisor.

Fully negotiated redundant VR restarts

The last step on the journey will be aiming towards a fully redundant handover from old to new VR:

  • In this final step the end goal is to make all VRs redundant capable, which will reduce same version restart times as well as future upgrade restart times.
  • New VR will again be started in parallel to old, but will be configured with the redundancy options currently used in the RVR.
  • Once new VR is up the old and new VRs will internally negotiate the handover of all networking connectivity and services, before the old VR is shut down.

– – –

During this journey there are a number of tasks that need to be carried out, both to make the VR's internal processing more efficient and to improve the backend network restart mechanisms:

  • General speedup of IPtables rules application
  • Fix and improvement of the DNS / DHCP configuration to eliminate repetition of processing steps and cut down on processing time
  • Further improvements to the redundant Virtual Router: VRRP2 configuration, and/or a move to a different VR HA solution
  • A move to make all VRs redundant capable by default
  • Move from python2 to python3
  • Consider a move from IPtables to NFtables
  • Converge and make network topologies flexible, refactor and merge VPC and non-VPC code base

Conclusion

With the changes implemented in 4.11.1 we have already made a huge step forward in reducing network downtime as part of VR restarts – whether this is during day to day operation or as part of a CloudStack version upgrade. With downtime reduced by up to 80% and average figures of less than 30 seconds this is a considerable improvement – and this is only the first step on the journey.

We have not yet achieved our goal of "zero downtime upgrades" but it is worth considering that the network interruptions that CloudStack can now achieve during an upgrade will be less than the timeouts for many applications and protocols.

In the coming CloudStack versions we hope to continue this development and further reduce the figures, working towards the ultimate goal of "zero downtime upgrades". 

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends his time designing, implementing and automating IaaS solutions based around Apache CloudStack.

 



----

You asked, we delivered. Introducing XenServer 7.5
// Citrix Blogs

I'm still recovering from Citrix Synergy. It was an amazing event — especially having the time to speak with customers and partners. I had the opportunity to present in a breakout session (check it out below!) with David Cottingham about …

  



----

Bring Out the Best in Citrix Application and Desktop Delivery with XenServer!
// Citrix Blogs

Upgrade Your XenServer Virtual Environment Now and Bring Out the Best in Citrix Application and Desktop Delivery

Hello everyone!

Did you make it to Citrix Synergy this year? If so, I hope you enjoyed it as much as I did! …

  



----

End-of-Life Announcement for Chef Reporting, Enterprise Chef Server 11, and Chef Analytics
// Chef Blog

Last week at ChefConf 2018, we announced the general availability (GA) of Chef Automate 2. Chef Automate 2 is a continuous automation platform that provides operational visibility into your managed fleet, with tools such as a query language, trend graphs, and an event timeline to help you narrow down and correct errors. Chef Automate has been on the market since July 2016 and now includes all of the capabilities found in some of our older products, which are already marked as deprecated.

Therefore, today we are announcing the end-of-life date of Chef Reporting, Chef Analytics, and Enterprise Chef 11 as December 31, 2018.

What does end-of-life mean?

On the end-of-life date, all development on the affected products will stop. No new versions will be released after that date, including fixes for security vulnerabilities or bugs.

What are the replacement products? What should I do if I am still using these products?

Customers still using Chef Reporting and Chef Analytics should plan to migrate to Chef Automate 2. Chef Automate 2, released in May 2018, is our modern platform for continuous automation and is the culmination of a nine-month re-architecture initiative to improve performance, scale and responsiveness.

Enterprise Chef Server 11 users should upgrade to the Chef Server 12 series, which is the replacement product version. For information on how to upgrade from Enterprise Chef Server 11 to Chef Server 12, please consult the product documentation. Chef Server 12 is compatible with all currently-supported versions of Chef Client.

Who can I contact to discuss my upgrade and support options?

Please contact your Chef customer success manager or account manager to review your options around this end-of-life announcement.



----

Chef Open Source Community News – May 2018
// Chef Blog

Here are this month's updates from the Chef, Habitat, and InSpec open-source communities. Despite the fact that we at Chef Software were busy preparing for ChefConf – which had its own raft of product announcements, including the release of Chef Automate 2 – we also managed to accomplish a lot in the open-source world.

Chef

Due to some last-minute regressions, it took a bit of time for us to release ChefDK 3, but it finally arrived on May 21st. Of note in this release is the inclusion of Chef 14 and InSpec 2, necessitating the major version number bump. Many other changes are included, such as newer versions of Test Kitchen, ChefSpec, Foodcritic, Cookstyle, and Berkshelf. You can read about these and many of the other improvements in ChefDK here.

We released Chef Client 14.1.12 on May 16th, which fixes a few regressions in Chef 14. Of particular note: if you are running Chef in FIPS 140-2 environments, this is the version of Chef 14 you want to use, as we needed to correct some incompatibilities between Ohai and FIPS 140-2 mode. Also, Ubuntu 18.04 LTS is now supported, and we said goodbye to macOS 10.10, which is now end-of-life.

Finally, we released Chef Client 13.9 for those of you who are still on Chef 13. In addition to the usual set of bugfixes, we have backported many of the custom resource improvements in Chef 14 to Chef 13.

Habitat

There were many Habitat announcements at ChefConf 2018: the Habitat exporter for Helm and Azure Kubernetes Service, the Open Service Broker reference implementation, and the general availability of Habitat on-premise. The team has also been preparing for a major release of the Habitat supervisor to be released shortly: you should read their post on what breaking changes will be in the forthcoming Habitat 0.56.0.

InSpec

The team released InSpec twice this month, mostly bug fixes, but if you are looking for InSpec to check that your AWS S3 buckets are encrypted, you will want to upgrade to InSpec 2.1.67 or greater. All of the InSpec projects, including Train (the transport interface), the InSpec experimental plugins, and InSpec itself, now live under the InSpec GitHub organization.
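
As a sketch of what that check looks like in a profile (matcher name per the InSpec AWS resource documentation; verify against your InSpec version):

cat > s3_encryption.rb <<'EOF'
# requires InSpec >= 2.1.67, per the note above
describe aws_s3_bucket('my-bucket') do
  it { should exist }
  it { should have_default_encryption_enabled }
end
EOF
inspec exec s3_encryption.rb -t aws://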

At ChefConf we also announced beta support for InSpec on Google Cloud Platform (GCP), with functionality similar to the AWS and Azure features already in core InSpec. If you use GCP, we would love it if you checked out the GitHub repository. Just as with InSpec AWS and InSpec Azure, we will merge GCP support back into InSpec core when it is ready.

Finally, we released InSpec-Iggy into incubation: an experimental InSpec plugin that lets you generate InSpec controls from a HashiCorp Terraform state file.

Miscellaneous

Supermarket 3.1.68 is out, with a few small enhancements including bumping the version of Ruby to 2.5.1.

Chef Software is now maintaining three new Windows-related cookbooks, chocolatey_config, chocolatey_source, and windows_firewall, with an eye to refactoring them for inclusion into core Chef. If you have opinions about how these resources should be improved, head on over to the respective GitHub repositories.

The Sous Chefs team has been busy during the month of May, taking ownership of several popular cookbooks such as java and haproxy. They've also completed a major refactor of the PostgreSQL cookbook to make it custom resource-oriented. Finally, the Sublime and Atom editor plugins for Chef are now Sous Chefs projects as well.

Finally, if you are looking for the notes from the community summit held last week as part of ChefConf 2018 in Chicago, they can be found here on GitHub.

Community award

In light of the fact that we recognized three awesome community Chefs just last week at ChefConf, we are skipping the separate monthly award as part of this post. But please join us again in congratulating Dan Webb, Romain Sertelon, Edmund Haselwanter, Tim Smith, and Joshua Timberman for their incredible contributions to the open-source community.



----

Microsoft has reportedly acquired GitHub
// The Verge

Microsoft has reportedly acquired GitHub, and could announce the deal as early as Monday. Bloomberg reports that the software giant has agreed to acquire GitHub, and that the company chose Microsoft partly because of CEO Satya Nadella. Business Insider first reported that Microsoft had been in talks with GitHub recently.

GitHub is a vast code repository that has become popular with developers and companies hosting their projects, documentation, and code. Apple, Amazon, Google, and many other big tech companies use GitHub. Microsoft is the top contributor to the site, and has more than 1,000 employees actively pushing code to repositories on GitHub. Microsoft even hosts its own original Windows File Manager source code on GitHub. The service was last valued at $2 billion back in 2015, but it's not clear exactly how much Microsoft has paid to acquire GitHub.

Microsoft has been rapidly investing in open source technology since Satya Nadella took over the CEO role. Microsoft has open sourced PowerShell, Visual Studio Code, and the Microsoft Edge JavaScript engine. Microsoft also partnered with Canonical to bring Ubuntu to Windows 10, and acquired Xamarin to assist with mobile app development.

Microsoft is also using the open source Git version control system for Windows development, and the company even brought SQL Server to Linux. Microsoft's Visual Studio Code, which lets developers build and debug web and cloud applications, has soared in popularity with developers. Microsoft's GitHub acquisition will likely mean we'll start to see even closer integration between Microsoft's developer tools and the service. At Build last month, Microsoft continued its close work with GitHub by integrating the service into the company's App Center for developers.

There will likely be questions around Microsoft's GitHub acquisition, especially among some open source advocates who are wary of Microsoft's involvement. If Microsoft does indeed announce this acquisition on Monday then developers won't have too long to wait to get a better idea of Microsoft's GitHub plans.


----

Thursday, May 24, 2018

Automatically Generating InSpec Controls from Terraform
// Chef Blog

InSpec-Iggy, or "Iggy" for short, is a new plugin for InSpec that generates InSpec compliance profiles from Terraform .tfstate files (and eventually AWS CloudFormation and Azure Resource Manager templates). Iggy was originally inspired by Christoph Hartmann's inspec-verify-provision repository and the associated blog post on testing Terraform with InSpec. With the release of InSpec 2.0 and the addition of AWS and Azure support, automatically generating controls became much more feasible. Let's see a quick demo of how it works:

inspec terraform generate

This currently generates a set of InSpec controls by mapping Terraform resources to InSpec resources. The output may be captured as a file (e.g. "test.rb") and used from the command line with InSpec. The demo uses the Terraform Basic Two-Tier AWS Architecture and the following commands:

terraform apply
inspec terraform generate > test.rb
inspec exec test.rb -t aws://us-west-1

With the current versions of InSpec-Iggy (0.2.0) and InSpec (2.1.83) we get the following output:

$ inspec exec test.rb -t aws://us-west-1

Profile: tests from test.rb (tests from test.rb)
Version: (not specified)
Target:  aws://us-west-1

  ✔  aws_ec2_instance::i-0ed224373e440f72b: Iggy terraform.tfstate aws_ec2_instance::i-0ed224373e440f72b
     ✔  EC2 Instance i-0ed224373e440f72b should exist
     ✔  EC2 Instance i-0ed224373e440f72b id should cmp == "i-0ed224373e440f72b"
     ✔  EC2 Instance i-0ed224373e440f72b instance_type should cmp == "t2.micro"
     ✔  EC2 Instance i-0ed224373e440f72b key_name should cmp == "mattray-tf"
     ✔  EC2 Instance i-0ed224373e440f72b subnet_id should cmp == "subnet-fbc7f29c"
  ✔  aws_security_group::sg-7770ba0f: Iggy terraform.tfstate aws_security_group::sg-7770ba0f
     ✔  EC2 Security Group sg-7770ba0f should exist
     ✔  EC2 Security Group sg-7770ba0f description should cmp == "Used in the terraform"
     ✔  EC2 Security Group sg-7770ba0f vpc_id should cmp == "vpc-0eacdb69"
  ✔  aws_security_group::sg-0a70ba72: Iggy terraform.tfstate aws_security_group::sg-0a70ba72
     ✔  EC2 Security Group sg-0a70ba72 should exist
     ✔  EC2 Security Group sg-0a70ba72 description should cmp == "Used in the terraform"
     ✔  EC2 Security Group sg-0a70ba72 vpc_id should cmp == "vpc-0eacdb69"
  ✔  aws_subnet::subnet-fbc7f29c: Iggy terraform.tfstate aws_subnet::subnet-fbc7f29c
     ✔  VPC Subnet subnet-fbc7f29c should exist
     ✔  VPC Subnet subnet-fbc7f29c availability_zone should cmp == "us-west-1a"
     ✔  VPC Subnet subnet-fbc7f29c cidr_block should cmp == "10.0.1.0/24"
     ✔  VPC Subnet subnet-fbc7f29c vpc_id should cmp == "vpc-0eacdb69"
  ✔  aws_vpc::vpc-0eacdb69: Iggy terraform.tfstate aws_vpc::vpc-0eacdb69
     ✔  VPC vpc-0eacdb69 should exist
     ✔  VPC vpc-0eacdb69 cidr_block should cmp == "10.0.0.0/16"
     ✔  VPC vpc-0eacdb69 dhcp_options_id should cmp == "dopt-d76783b2"
     ✔  VPC vpc-0eacdb69 instance_tenancy should cmp == "default"

Profile Summary: 5 successful controls, 0 control failures, 0 controls skipped
Test Summary: 19 successful, 0 failures, 0 skipped

inspec terraform extract

This currently reads the terraform.tfstate file, looks for tagged resources, and extracts commands for executing profiles against those machines. This is still under development, but the current demo produces the following:

$ inspec terraform extract -t terraform.tfstate
inspec exec https://github.com/dev-sec/apache-baseline -t ssh://54.183.205.70 -i mattray-tf
inspec exec https://github.com/dev-sec/linux-baseline -t ssh://54.183.205.70 -i mattray-tf
inspec exec https://github.com/mattray/hong-kong-compliance -t aws://us-west-2

This needs a small bit of tweaking, but it works:

inspec exec https://github.com/dev-sec/apache-baseline -t ssh://ubuntu@54.183.205.70 -i mattray-tf
...
Profile Summary: 5 successful controls, 8 control failures, 1 control skipped
Test Summary: 103 successful, 14 failures, 1 skipped

Working with InSpec-Iggy

InSpec-Iggy is available through RubyGems, so you can gem install inspec-iggy to get started now. If you want to get involved in development, there are further instructions on GitHub.

Writing InSpec Plugins

Writing InSpec plugins is not yet a documented feature, so I've written an example InSpec plugin and pushed it to RubyGems and GitHub if you would like to learn more.

The Future of Iggy

Chef has been working with a leading international banking group to automate cloud compliance for Singapore and Hong Kong. We've been gathering requirements and use cases for the integration of InSpec and Terraform, and we welcome your feedback too. InSpec-Iggy is open source and Apache-licensed. Iggy is not yet 1.0; we want to build out stronger support for more Terraform resources and a better inspec terraform extract experience. AWS CloudFormation support is also under active development, and Azure Resource Manager templates will follow a similar pattern. We look forward to your input, testing, and patches as we work to expand InSpec's coverage of all of your infrastructure and resources.



----

InSpec now available in Azure Cloud Shell
// Chef Blog

This week brings us into another delightful ChefConf! We've made a lot of great announcements about enhancements and features across our suite of automation tools, and in Chef Automate itself with Automate 2.0. We also announced that Chef Software is now even more tightly integrated with the Microsoft Azure platform: users can now run InSpec natively as part of the Azure Cloud Shell experience. This allows everyone using Cloud Shell to easily run InSpec compliance scans right from their browser!

Azure Cloud Shell allows you to connect to Azure using an authenticated, browser-based shell experience that's hosted in the cloud and accessible from virtually anywhere. Azure Cloud Shell is assigned per unique user account and automatically authenticated with each session. You get a modern command-line experience from multiple access points, including the Azure portal, shell.azure.com, Azure mobile app, Azure docs (e.g., Azure CLI 2.0), and the VS Code Azure Account extension.

Using InSpec in Azure Cloud Shell is super easy! Just call inspec from the bash prompt, and you're on your way!

InSpec is able to leverage the Azure Managed Service Identity system that's baked into Cloud Shell, giving you instantaneous access to your Azure resources in any subscription you have access to. All the examples in this blog can be found on GitHub at: https://github.com/jquick/azure_shell_inspec_demo

In the following use cases we've exported our subscription ID to an environment variable.

To scan a resource group in your subscription, just call "inspec exec [your profile] -t azure://[your subscription id]" with an Azure resource profile.
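
Concretely, with the subscription ID exported to an environment variable as mentioned above, and a placeholder profile name, the invocation looks like:

export SUBSCRIPTION_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
inspec exec my-azure-profile -t azure://$SUBSCRIPTION_ID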

In this example we first scan for a resource group whose name we got wrong, so our tests fail. When we provide the correct resource group name, we get our results back.

The next example shows a more detailed scan of a VM resource in a resource group:

Here we scan for several different VM resource attributes, so that we can verify our deployment is configured to the specifications our team requested. The results of the InSpec scan show that we've got some changes to make to this VM resource to bring it into compliance.

Finally, this example shows you can still use InSpec in Cloud Shell to do remote scans on systems in your environment by providing the appropriate credentials for a machine.

Here we run the DevSec Linux Baseline against our Ubuntu 16.04 VM. This is an empty VM, and it could use some remediation with a Chef cookbook.
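
The remote scan follows the usual InSpec form; assuming an SSH key available in Cloud Shell, it would look something like this (user, address, and key path are placeholders):

inspec exec https://github.com/dev-sec/linux-baseline \
  -t ssh://azureuser@<vm-ip> -i ~/.ssh/id_rsa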

Get Started

You can get running with Azure Cloud Shell today by visiting https://shell.azure.com!

We hope you enjoy using InSpec inside of Azure Cloud Shell! We'll be looking to add other tools into Cloud Shell in the near future.

Learn More

To learn more about how to use InSpec and Azure together, check out these resources:



----

Chef Deepens Support for Google Cloud Platform
// Chef Blog

Building on the work we announced last fall to help you provision GCP resources with Chef cookbooks, and in honor of ChefConf 2018, Chef and Google Cloud Platform (GCP) have been working together in several exciting ways:

Let's take a deeper look at each of these new developments.

InSpec integration with GCP

In an increasingly complex regulatory environment, many DevOps teams and information security officers struggle to answer important questions:

  • Is our infrastructure deployed and configured as it should be?
  • Can we prove that our deployments are compliant with a growing list of guidelines (CIS, PCI, SOX, HIPAA etc.)?

InSpec by Chef helps you express security and compliance requirements as code and incorporate it directly into the delivery process, eliminating ambiguity and manual processes to help you ship faster while remaining secure.

GCP continues to introduce new ways to protect and control your GCP services and data. This has made it a popular platform for high-profile customers like major motion picture studios, which use GCP for security sensitive workloads such as rendering pipelines for digital assets.

Now InSpec users can continuously test their Google Cloud deployments (regardless of what tool they have used to provision and configure them) for issues like whether a firewall should allow HTTP traffic or whether a storage bucket should be open to the world.
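
As a sketch of the kinds of checks described here, using resource names from the beta inspec-gcp pack (project, rule, and bucket names are placeholders; see the pack's repository for the exact invocation):

cat > gcp_checks.rb <<'EOF'
# is the expected firewall rule present in the project?
describe google_compute_firewall(project: 'my-project', name: 'default-allow-http') do
  it { should exist }
end

# is the expected storage bucket present?
describe google_storage_bucket(name: 'my-bucket') do
  it { should exist }
end
EOF
inspec exec gcp_checks.rb -t gcp://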

Further, Chef and Google are developing a recommended baseline InSpec profile for securing GCP resources, and will incorporate access to InSpec into Google Cloud Security Command Center for ease of use straight from the Google Cloud Console.

Google Container Registry support in Habitat

Habitat by Chef delivers application automation that helps modern application teams build, deploy, and manage any application in any environment, from traditional data centers to containerized microservices. In December 2017, Chef announced support for running Habitat applications on Google Kubernetes Engine, publishing your containers via Docker Hub. Learn more about this at the session "How the Habitat-operator Brings Habitat Awesomeness to Kubernetes" on May 23rd at 4:00 p.m. at ChefConf.

Later this summer, Habitat users will be able to build their applications and directly publish these artifacts into Google Container Registry. This integration of Habitat with Container Registry and Kubernetes Engine will enable customers to refactor and re-architect their apps into modern containerized architectures as part of their migration efforts onto GCP.

Provision more GCP resources with Chef

In 2017, we released Chef cookbooks to provision and configure the following GCP services:

Recently, we've also added coverage for the following services:

You can download these individually via Chef Supermarket, or get them all together here.

See you at the show

If you'll be at ChefConf, we'd also love to see you at the Google booth during the event. You can attend the "Let's use Google Cloud Platform (GCP) and Chef" session at 2:00 p.m. on May 24th to learn about using Chef together with GCP's suite of services.



----

Announcing Habitat Builder on-premises and expanded ecosystem integrations
// Chef Blog

We are excited to be making a number of Habitat-related product announcements today at ChefConf 2018. Over the last couple of years, our customers have adopted Habitat for two main scenarios: lifting, shifting, and modernizing legacy applications into the cloud or containers; and accelerating the adoption of containers for new applications as they move into wider deployment of technologies like Kubernetes. This week's product announcements have a direct connection to these use cases.

First, we are announcing the general availability of the Habitat depot behind the firewall. This helps all Habitat users, but particularly those who have legacy or proprietary business applications that cannot be built and published through the Habitat Builder SaaS. Over the last six months we have been working with a small group of customer development partners to bring this capability to life. You can read more about its features here.

Secondly, we are making a number of announcements related to Habitat integrations into the cloud-native and container ecosystem:

  • The newly-updated Habitat Operator for Kubernetes bridges the standard management interface of Habitat services with the Operator model of Kubernetes for container maintenance. It is the recommended way to operate Habitat packages inside Kubernetes, and it also saves developers from having to write their own operators for each and every application they deploy into Kubernetes.
  • Habitat Builder can now publish directly to Azure Container Registry (ACR) allowing for one-click continuous deployment of even the most sophisticated applications into Azure Kubernetes Service (AKS). We launched this a few weeks ago at Microsoft Build, so you can read more about it here. Be sure to attend our webinar in a couple of weeks where we will demonstrate how Habitat's build once, run anywhere approach allows you to deliver the same application to Azure Compute Service and AKS with no additional work.
  • The Open Service Broker (OSB) standard, originally created by Pivotal, allows you to bridge applications and services running on different clouds and platforms. We're thrilled to announce a Habitat OSB reference implementation so you can build and ship these packages once, whether they run in a containerized environment or not.
  • Helm chart integration allows you to export your Habitat-built packages into Helm.
  • Finally, you can now track Habitat application health using an integration with Splunk's HTTP Event Collector (HEC).

Habitat at ChefConf

It's hard to believe that Habitat is not even two years old and yet we have dozens of customers – many of whom are here this week at ChefConf – telling their stories about how Habitat is helping them streamline their development processes and achieve one way to production for applications, no matter what their vintage. You'll hear many of these stories on the main stage. But don't miss some of the breakout sessions where customers are sharing practical lessons learned and problems they are solving with Habitat. And, as usual, you can visit habitat.sh to get started with Habitat.



----

Introducing Chef Workstation
// Chef Blog

We're excited to announce the release of Chef Workstation, providing everything you need to get started with Chef with a simple one-click installation.

Ad-Hoc Configuration Management with chef-run

Chef Workstation comes with the new chef-run utility, which can be used to execute Chef code on any remote system accessible via SSH or WinRM. This provides a quick way to apply config changes to the systems you manage, whether or not they're actively managed by Chef, without requiring any pre-installed software. With chef-run, you can execute individual resources or pre-existing Chef recipes on any number of servers with a single, simple command.

In the simple example above, we see chef-run used in tandem with InSpec, Chef's compliance automation framework. First, InSpec checks whether our host has the ntp package installed, which is responsible for keeping server clocks in sync. Since our InSpec profile reports a failure, we then use chef-run to install ntp using Chef's package resource, like so:

chef-run -i ~/path/to/sshkey user@host package ntp action=install

Finally, we re-run the previously failing InSpec profile for immediate validation that our update was successfully applied.
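
For reference, the detect-correct loop shown here can be reproduced with a check file as simple as this (a sketch; the demo's actual profile may differ, and user, host, and key path are placeholders):

cat > ntp_check.rb <<'EOF'
# detect: is the ntp package installed?
describe package('ntp') do
  it { should be_installed }
end
EOF
inspec exec ntp_check.rb -t ssh://user@host -i ~/path/to/sshkey    # fails at first
chef-run -i ~/path/to/sshkey user@host package ntp action=install  # correct
inspec exec ntp_check.rb -t ssh://user@host -i ~/path/to/sshkey    # now passes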

Robust Testing & Development Tools

Chef Workstation also includes everything already packaged within the ChefDK. Development tools for testing, dependency resolution, and cookbook generation are all included alongside chef-run, ensuring that whether you're consuming existing chef policies, or creating your own, you have everything you need to get up and running quickly.

Get Started Now



----

Chef Automate 2 – A Modern Platform for Continuous Automation
// Chef Blog

We are delighted to announce the general availability of Chef Automate 2, a major upgrade to our continuous automation platform. Chef Automate 2 is the culmination of a nine-month re-architecture initiative to improve performance, scale, and responsiveness. A refreshed UI allows you to see all infrastructure and compliance events in one interface and, most importantly, to isolate and debug failures. Building on the new capabilities in InSpec 2, we've enhanced the compliance features in Chef Automate to bring in cloud and network device scanning, and we've made it easier to manage custom profiles. Finally, a true platform API in Chef Automate 2 allows fine-grained data access control and makes possible new integrations with our many partners, including ServiceNow, Splunk, Google Cloud Platform, HP Enterprise, and others joining us this week in Chicago.

Install Chef Automate 2 and start a 60-day trial!

Enhanced operational visibility and debugging

Chef Automate 2 provides new tools and visualizations to help users gain the actionable insights they need to detect and correct problems faster. A streaming event feed displays every action taken and helps identify issues. Improved querying capabilities allow for easier and more insightful drill-down into infrastructure and compliance events to uncover the source of problems.

Compliance scanning and reporting in any environment

Since last year's ChefConf, Chef Automate has added significant compliance capabilities to detect and report on issues covering a wide range of environments, compliance benchmarks, and use cases. Chef Automate 2 continues that trend to extend to the cloud and network devices by taking advantage of the latest innovations in InSpec. Chef Automate 2 supports compliance scanning and reporting in AWS, Azure, and Google Cloud Platform environments, as well as against Cisco IOS network devices. This helps organizations take advantage of a single platform to test and secure their entire fleet.

Re-architected for speed and flexibility

Our customers put Chef Automate to the test in demanding, large scale, mission critical environments every day. Over the past year our engineering team has worked closely with customers to ensure Chef Automate meets their demands and is ready to take on the next set of challenges headed this way, including automating fleet sizes of tens of thousands of nodes. Chef Automate 2 features a modern UI built on top of an API-driven microservices architecture, which allows for dramatically faster performance, scale, and true integration points for customers and partners.

Moving forward

Take Chef Automate for a 60-day trial by visiting https://automate.chef.io. Current Chef Automate customers can take advantage of in-place upgrades with automatic data migration from Chef Automate 1.x. For more information, please visit: https://automate.chef.io/docs/upgrade.



----

Chef DK 3.0 Released
// Chef Blog

Today we're delighted to announce the release of Chef DK 3.0. With this release, you can detect and correct issues across more platforms faster than ever before with the addition of Chef 14 and InSpec 2.

Chef 14

Chef 14 brings with it a variety of performance and workflow improvements, as well as nearly thirty new resources native to the Chef DSL. This includes better built-in support for Windows and macOS management, as well as native management of Red Hat Subscription Manager (RHSM) within Chef. For more details on what's new in Chef 14, be sure to check out our release announcement and webinar.

InSpec 2

InSpec 2 introduced the ability to scan more than just servers: it can connect directly to cloud APIs to validate that servers and services alike are configured securely. This release includes resources for Microsoft Azure and Amazon Web Services, so that as you take advantage of your cloud vendor's utilities, you can validate their compliance with the same ease and rigor as with homegrown solutions on traditional infrastructure. Combine that with performance improvements and new resources for validating everything from SQL to IIS to Docker containers, and you have the most robust InSpec ever at your fingertips in Chef DK 3! Find out more in our release announcement and InSpec 2.0 webinar.

Get Chef DK 3 Today

Get hands-on with Chef DK 3.0, as well as past releases, by downloading the installer for your OS from downloads.chef.io.

What's Next?



----

Happy Birthday Learn Chef Rally!
// Chef Blog

We're excited to celebrate the one-year anniversary of Learn Chef Rally! It's been a great year for learning Chef, and as part of the celebration, we'll be releasing a limited edition badge at ChefConf next week. I'm not going to show it to you today, but imagine a magical feline celebrating and you're getting close. The badge will be available to any registered learner who completes a module between May 22 and 25, 2018.

Learn Chef Rally by the numbers

In this first year, 20,000 Chefs have created an account on Learn Chef Rally to track their progress and earn badges. Thousands more are using Learn Chef Rally anonymously. Overall, Chefs have completed more than 25,000 learning modules.

Speaking of badges, more than 10,000 have been awarded and we were excited to see so many of you join the fun. There are currently 18 badges available to earn, in addition to some occasional limited edition badges. Here's a sample of popular badges:

New content every month

Back in September 2017, Thomas Petchel blogged about celebrating 10K registered users, with highlights of popular content. We've been adding new content every month since then, so if it's been a while, I suggest you visit the site again to discover new content and site improvements.

Show off your progress

If you like earning badges and documenting your progress, you should create an account and log in whenever you start a new module. You'll be able to see which tracks and modules you've completed, your progress in unfinished tracks and modules, the badges you've collected, and other accomplishments. You'll also be notified when there's new material available.

Learn Chef Rally SWAG Store

As you complete tracks and hit specific milestones in Learn Chef Rally, you'll receive an email with a link to choose a gift from our SWAG store – sunglasses, bottle openers, and notebooks, just to name a few of your options.

Get started on Learn Chef Rally

I encourage you to make your way to Learn Chef Rally soon and sign up for an account before the next badge is released next week. You'll find a lot of helpful content for developing your Chef, Habitat, InSpec, and/or DevOps skills. If you see a gap, please let us know at training@chef.io.



----

Chef Open Source Community News – April 2018
// Chef Blog

Here's what happened in the Chef, Habitat and InSpec open-source communities over the last month.

Chef

The biggest news from the Chef project is that we released Chef 14, a faster and easier-to-use Chef. We've already covered all the major changes in a couple of blog posts and a webinar, so we won't delve into them again here. We are also on track to release ChefDK 3 by the end of April, which will include both Chef 14 and InSpec 2. Finally, on April 30 we bid a fond farewell to Chef 12, which becomes end-of-life.

Habitat

We released Habitat 0.55.0 near the end of March. This release has a number of new features. Most important is an upgrade to the Launcher, the process manager used by the Habitat supervisor. By design, the Launcher rarely changes unless there are major improvements or bugfixes, so this is a good time to update. The Habitat Builder also gained secrets support so you can inject sensitive information like database passwords or license keys into the build process.

Finally, authorization to Habitat Builder no longer works with the old GitHub authentication tokens. You should instead generate a Habitat personal access token, as mentioned in last month's update. If you are suddenly getting authorization errors when interacting with Habitat Builder, this is why.

InSpec

The InSpec team released several new versions over the last month, primarily adding AWS-related resources and enhancing the parameters that can be matched inside existing resources. We highly recommend upgrading to InSpec 2.1.54 if you are developing InSpec profiles for AWS cloud compliance. Here is a short list of new resources released over the last month:

  • aws_s3_bucket_object
  • aws_sns_topics
  • aws_sns_subscription
  • aws_kms_key
  • aws_config_delivery_channel
  • aws_rds_instance
  • aws_s3_buckets
  • aws_route_tables
  • chocolatey_package

Notable enhancements include the ability to test an AWS account's root user for the presence of a hardware or virtual multi-factor authentication (MFA) device, as well as a significant expansion of the interface for fully testing AWS security group egress and ingress rules and the network configuration of your VPC.
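
The root-user MFA check mentioned here, for example, reads like this (resource and matcher per the InSpec AWS documentation; shown as a sketch):

cat > root_mfa.rb <<'EOF'
# the account root user should have a hardware or virtual MFA device
describe aws_iam_root_user do
  it { should have_mfa_enabled }
end
EOF
inspec exec root_mfa.rb -t aws://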

Other

We released Test Kitchen 1.21, which moves to kitchen.yml as the configuration file name rather than .kitchen.yml (with the leading period), for better consistency with the rest of Chef's tools. The older name is still supported for backwards compatibility.
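
Opting in on an existing cookbook is a one-line rename (the old name keeps working, as noted):

mv .kitchen.yml kitchen.yml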

Community MVP

Every month we reward outstanding contributions from one of our community members across the Chef, Habitat and InSpec projects. These contributions do not necessarily need to include code; we want to recognize someone who has dedicated significant time and energy to strengthening our community.

This month we'd like to recognize Romain Sertelon, who is a tireless contributor to the Habitat community. Romain has submitted dozens of pull requests in the last year and provided countless hours of support in Slack. Recently, Romain contributed a substantial amount of work towards the core plans refresh, without which it would have taken twice as long. The support and code he gives to the community, the thoughtful comments he provides in the request-for-comments (RFC) process, and everything else he does embody what a stellar community member looks like. In short, Romain works his ass off, and the entire Habitat team would like to make sure he knows he is valued by the project.

Thanks for everything, Romain, and we'll be in touch to get your mailing address so we can send you a special token of our appreciation.


