Tuesday, June 26, 2018

Running Chef and InSpec with Habitat – How We Made that Demo


Editor's Note: ChefConf 2018 'How We Did It' Series

Welcome to part two of our How We Did It series based on demos from the ChefConf 2018 Product Vision & Announcements keynote presentation. In case you missed it, review part one: Habitat and Kubernetes.

Today we'll look at the demo presented by Mike Krasnow, Product Manager on our Infrastructure Automation team. Mike's demo looks at a scenario where we can ensure that our infrastructure is configured consistently and securely by using Habitat, InSpec, and Chef together for end-to-end automation. In this blog post, Customer Engineer John Snow will take us on a guided tour of how it all came together.

Hab Solo: A ChefConf Story

Wow! What an amazing time at ChefConf. What a great community of practitioners and thought leaders coming together to learn and grow. Mike Krasnow's demo showed a powerful example of the complete Chef ecosystem running together in harmony. Like any technical demo making lofty claims, it invites a healthy skepticism about how much of what you saw was an accurate representation of the presenter's kit. Let me assure you that what you saw was 100% real, and I'm here to give the technical breakdown of how it all worked.

The Setup

Our demo environment had a lot of moving parts, and in his introduction, our SVP of Product & Engineering Corey Scobie noted that it would be hard to summarize in a single image. In the first section, we had a Chef Automate 2.0 server, a CentOS development node, a CentOS production node, a webhook execution server (which we will talk about in a bit), a git repository to store our code, and Mike's trusty laptop.

The Detection

To get things started, we used Terraform to build out the development node and production node, install and initialize Habitat as a systemd service, and load the chef-demo/chef-base Habitat artifact. We also created a policyfile to define how to validate and harden our nodes that looked like this:
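For a sense of what that Terraform provisioning actually does on each node, here's a minimal sketch. The systemd unit contents and paths are my assumptions, not the demo's exact scripts, and the hab commands are left as comments since they need the Habitat CLI:

```shell
#!/bin/sh
# Sketch of per-node setup: install Habitat, run the Supervisor under
# systemd, then load the policy package. UNIT_DIR stands in for
# /etc/systemd/system so this sketch stays side-effect free.
set -e
UNIT_DIR="${UNIT_DIR:-$(mktemp -d)}"

# 1. Install the Habitat CLI (shown as a comment; run on a real node):
#    curl https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.sh | sudo bash

# 2. Run the Supervisor as a systemd service.
cat > "$UNIT_DIR/hab-sup.service" <<'EOF'
[Unit]
Description=Habitat Supervisor

[Service]
ExecStart=/bin/hab sup run
Restart=on-failure

[Install]
WantedBy=default.target
EOF

# 3. With the Supervisor up, load the demo package:
#    hab svc load chef-demo/chef-base --channel unstable
echo "wrote $UNIT_DIR/hab-sup.service"
```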

# Policyfile.rb - Describe how you want Chef to build your system.
#
# For more information on the Policyfile feature, visit
# https://docs.chef.io/policyfile.html

# A name that describes what the system you're building with Chef does.
name "base"

# Where to find external cookbooks:
default_source :chef_repo, "../"

# run_list: chef-client will run these recipes in the order specified.
run_list ["hardening::default",
          "compliance::default"]

You will notice that the run-list has both a hardening cookbook to ensure our configurations are secure, and a compliance cookbook which makes use of the Audit Cookbook to run our InSpec scans. This ensured that the nodes that Mike created were compliant. Prior to the demo, we removed the hardening cookbook from the run-list and rebuilt our Habitat artifact. We did this to ensure that our nodes continued to report their compliance state to Chef Automate, but would not correct Mike's chaotic keystrokes too quickly.

I love it when a plan.sh comes together

One of the core components of the demo was the chef-base habitat package. All Habitat packages start with a plan.sh file to build the application, which in our case was the policyfile shown above. This is the plan file we used:

if [ -z ${CHEF_POLICYFILE+x} ]; then
  echo "You must set CHEF_POLICYFILE to a policyfile name."
  echo
  echo "For example: env CHEF_POLICYFILE=base build"
  exit 1
fi

scaffold_policy_name="$CHEF_POLICYFILE"
pkg_name=chef-base
pkg_origin=chef-demo
pkg_version="0.1.0"
pkg_maintainer="The Habitat Maintainers "
pkg_description="The Chef $scaffold_policy_name Policy"
pkg_upstream_url="http://chef.io"
pkg_scaffolding="core/scaffolding-chef"
pkg_svc_user=("root")

Well, that was simple. We are using a component of Habitat called scaffolding. A Habitat scaffolding is a standardized plan for building a particular type of application, in this case a Chef policyfile. The great part is that with the line pkg_scaffolding="core/scaffolding-chef" I don't have to explicitly specify how to build or install my policyfile; I only need to provide some metadata and it will just work. Now I have a plan, but I need to build it. To do that, I enter my trusty Habitat Studio and run the build command to build the chef-base artifact. Because I have connected my repository to Builder, I can also build it by pushing my code changes to GitHub. This means that when I merge my change, Builder will automatically build and publish a new artifact to the unstable channel on Builder.
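For reference, that local Studio workflow looks roughly like this on a workstation. The origin and package names come from the demo; the commands are standard Habitat CLI usage, shown as comments since they need the CLI and an interactive Studio session, and the artifact name is illustrative:

```shell
#!/bin/sh
# From the repository root containing habitat/plan.sh:
#
#   hab origin key generate chef-demo   # one-time: signing key for the origin
#   hab studio enter                    # clean, isolated build environment
#   build                               # inside the Studio: builds ./habitat/plan.sh
#
# The build drops a signed .hart artifact into ./results, e.g.:
#
#   results/chef-demo-chef-base-0.1.0-<timestamp>-x86_64-linux.hart
#
# which you can install locally or push to Builder by hand:
#
#   hab pkg upload results/chef-demo-chef-base-*.hart
workflow_documented=1
```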

Chef Client run in 12 Parsecs

So even though I am using scaffolding, I think it is still important to talk about how Habitat runs the Chef Client as an application. I may have bypassed the compressor to get to light speed, but it's always good to know how something works in case I have to troubleshoot later. Let's look at the init, run, and config files to better understand how they work. Just remember that all of this is part of the scaffolding, so you don't need to write or manage these files.


Let's take a look at the init hook first. The init hook is responsible for initializing my application.

#!/bin/sh

export SSL_CERT_FILE="{{pkgPathFor "core/cacerts"}}/ssl/cert.pem"

cd {{pkg.path}}
exec 2>&1
exec chef-client -z -l {{cfg.log_level}} \
  -c $pkg_svc_config_path/client-config.rb

The beauty of Habitat is that all of the code you write is in a language you already know: Bash for Linux and PowerShell for Windows. It's easy to write and understand because I simply write the steps I would take to set up and run the chef-client as a service, the same way I would in a shell script.

The config.rb

One of the features of Habitat is the ability to put configuration files into a config directory, which you can then use in your hooks to help configure your service. The chef-client has a config file where I can set variables to change some of Chef's behavior. For example, one of these variables is data_collector.server_url, which lets me tell the chef-client what the Chef Automate server URL is so it can report its run status. With scaffolding, this part is written for me as well. Here is the client-config.rb file built by the scaffolding:

cache_path "$pkg_svc_data_path/cache"
node_path "$pkg_svc_data_path/nodes"
role_path "$pkg_svc_data_path/roles"
ssl_verify_mode :verify_none
chef_zero.enabled true

unless ENV['BOOTSTRAP']
  data_collector.token "{{cfg.data_collector.token}}"
  data_collector.server_url "{{cfg.data_collector.server_url}}"
end

ENV['PATH'] = "{{cfg.env_path_prefix}}:#{ENV['PATH']}"
{{#if cfg.data_collector.enable ~}}
data_collector.token "{{cfg.data_collector.token}}"
data_collector.server_url "{{cfg.data_collector.server_url}}"
{{/if ~}}

You'll notice the Mustache-style template syntax. When the service is loaded, the Supervisor replaces those placeholders with values from either the default.toml contained within the package or a user.toml file located in the Habitat service directory. Here is the default.toml file created by the scaffolding:
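To make that data flow concrete, here is a deliberately simplified stand-in for the rendering step. Habitat actually uses Handlebars templating inside the Supervisor; this sed-based sketch only illustrates the direction of flow, from a TOML value into the rendered config file:

```shell
#!/bin/sh
# Simplified illustration: pull a value out of default.toml and substitute
# it for the {{cfg.*}} placeholder in a template. Not Habitat's real
# renderer, just the same idea in miniature.
set -e
work="$(mktemp -d)"

cat > "$work/default.toml" <<'EOF'
log_level = "warn"
EOF

cat > "$work/client-config.rb.tpl" <<'EOF'
log_level :{{cfg.log_level}}
EOF

# Extract the TOML value, then fill in the placeholder.
level=$(sed -n 's/^log_level *= *"\(.*\)"/\1/p' "$work/default.toml")
sed "s/{{cfg.log_level}}/$level/" "$work/client-config.rb.tpl" > "$work/client-config.rb"

cat "$work/client-config.rb"   # -> log_level :warn
```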

interval = 1800
splay = 180
log_level = "warn"
env_path_prefix = "/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin"

[data_collector]
enable = "false"
token = "set_to_your_token"
server_url = "set_to_your_url"

If I want to change these values, I create a user.toml file in the Habitat service directory, /hab/svc/chef-base/ on Linux. This will build the client-config.rb file with the new values. If you look at the init hook, you can see that I am calling that file with the -c switch. This tells the chef-client how to run my policyfile and report the run status to Chef Automate.
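As a hypothetical example (the token placeholder comes from the default.toml above, and the Automate URL is an illustrative stand-in, not a real endpoint), overriding the check-in interval and enabling the data collector would look something like this:

```shell
#!/bin/sh
# Writes a user.toml override. svc_dir stands in for /hab/svc/chef-base/,
# where the Supervisor would pick this file up and re-render
# client-config.rb with the new values.
set -e
svc_dir="$(mktemp -d)"

cat > "$svc_dir/user.toml" <<'EOF'
interval = 600   # run chef-client every 10 minutes instead of every 30

[data_collector]
enable = "true"
token = "set_to_your_token"                                     # your Automate API token
server_url = "https://automate.example.com/data-collector/v0/"  # your Automate URL
EOF

cat "$svc_dir/user.toml"
```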


The scaffolding also has a run hook to continuously run the chef-client as a service. Here is the run hook:

#!/bin/sh

export SSL_CERT_FILE="{{pkgPathFor "core/cacerts"}}/ssl/cert.pem"

cd {{pkg.path}}

rm {{pkg.svc_var_path}}/init

exec 2>&1
exec chef-client -z -i {{cfg.interval}} -s {{cfg.splay}} -l {{cfg.log_level}} -c $pkg_svc_config_path/client-config.rb

You can see that it is similar to my init hook. The main difference is that I run the chef-client at an interval with the -i switch and a splay with the -s switch, staggering the runs so they don't overload the Chef Automate server or my nodes. There you have it – a nice Habitat to run a Chef policyfile in.
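One detail the hooks above don't show but the demo relies on: the Supervisor only swaps in new builds automatically when the service was loaded with an update strategy. A guarded sketch of that load command follows, using standard hab flags; it needs the Habitat CLI and a running Supervisor to actually do anything:

```shell
#!/bin/sh
# Load the service so the Supervisor watches Builder's unstable channel
# and installs new builds as soon as they are published.
if command -v hab >/dev/null 2>&1; then
  msg=$(hab svc load chef-demo/chef-base --strategy at-once --channel unstable 2>&1 || true)
else
  msg="hab CLI not installed here; on the node this loads chef-demo/chef-base with auto-updates"
fi
echo "$msg"
```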

The Rest of the Story

There is more to this story than just how we got Habitat to run Chef. Specifically, Mike made a change to /etc/shadow, and that kicked off a sweet automated process that remediated the problem. So how did that work, you ask? Well, one of the great features of Chef Automate is the ability to create a webhook or a Slack notification. These notifications can send an alert based on any chef-client run failure or, as in our case, an InSpec compliance failure.

After Mike made his change to /etc/shadow, Habitat ran the chef-client, which ran our compliance cookbook, which invoked the audit cookbook, ran the linux-baseline profile, and reported the compliance failure to Chef Automate. Our webhook fired off a notification to a webhook server running a GitHub webhook, written by Kyleen MacGugan, that merged a pre-staged pull request, which updated the base policyfile to add the hardening cookbook back into our node's run-list.

As you see in the video, Builder sees that a new change has been merged and kicks off an automated build. Once that is done, the development node's Habitat Supervisor, which is monitoring the unstable channel for the chef-demo/chef-base package, sees that a new version is available. The Supervisor installs the updated package and kicks off a chef-client run, which now includes the hardening cookbook and sets the permissions on /etc/shadow back to what they should be. Finally, it kicks off another run of the compliance cookbook, and we see the node report as compliant in Chef Automate. So there you have it! Habitat running Chef as a service with the power of Chef Automate.


I want to give a special thank you to Mike Krasnow, Kyleen MacGugan, Adam Jacob, Jon Cowie, Scott Ford, David Echols, and many more who worked on the code for this demo and contributed to making it great. I would also like to thank the Habitat team for all their work.


The post Running Chef and InSpec with Habitat – How We Made that Demo appeared first on Chef Blog.

