Thursday, July 5, 2018

New Virus Decides If Your Computer Is Good for Mining or Ransomware [feedly]

New Virus Decides If Your Computer Is Good for Mining or Ransomware
https://thehackernews.com/2018/07/cryptocurrency-mining-ransomware.html

Security researchers have discovered an interesting piece of malware that infects systems with either a cryptocurrency miner or ransomware, depending upon the system's configuration, to decide which of the two schemes could be more profitable. While ransomware is a type of malware that locks your computer and prevents you from accessing the encrypted data until you pay a ransom to get the decryption…





Wednesday, June 27, 2018

And The Winner Is!…



----
And The Winner Is!…
// Chef Blog

We've had an exciting first half of the year at Chef. Between ChefConf last month, the announcement of Chef Automate 2.0, our updates to InSpec, and the new Chef Workstation, it's a busy time for us. We're also extremely pleased by the ongoing recognition our innovations in digital transformation have received from some of the industry's premier award programs. Here's a recap of the awards we've taken home in the first half of 2018:

January 2018

Habitat was named the "Most Innovative DevOps Solution of the Year" for 2017 in the DevOps Dozen, a reflection of some of the great work being done in the DevOps community with our application packaging solution. This list is DevOps.com's take on the best of the best in DevOps. In the same month, Chef Automate was named "Best Cloud Automation Solution" in the Cloud Awards. The Cloud Awards program celebrates the brightest and the best in Cloud Computing. Open to organizations across the globe, the Cloud Awards is the first and largest recognition platform of its kind.

February 2018

AWS OpsWorks for Chef Automate (OWCA) received a Cloud Computing Excellence Award from TMC. The award, presented by Cloud Computing magazine, honors vendors that have most effectively leveraged cloud computing in their efforts to bring new, differentiated offerings to market. Later that month, Habitat won a 2018 BIG Innovation Award. This annual business awards program recognizes the organizations, products and people that are bringing new ideas to life.

May 2018

Chef won a Visionary Spotlight Award for SaaS and Cloud Applications. The awards honor outstanding products, services and deployments across technology categories. Winners exemplify the industry's innovation, future-thinking execution, creativity and feature-set differentiation, offering channel partners a wealth of opportunities to strengthen their roles as trusted providers and grow their businesses.

June 2018

Chef was recognized in the SD Times 100: 'Best in Show' in Software Development. The SD Times 100 looks for companies that have set a clear path for developers to follow. The winning companies have rained down progress in the form of new projects, innovative technologies and game-changing ideas to delight and amaze the people who develop and ultimately use software. The SD Times 100 represents what its editors believe to be the best of the best.

Thank you

We are incredibly honored to be recognized for our innovations, and our growth as a company adds to the excitement. We thank our supportive customers, who make our innovations stand out, and the hard-working, experienced Chef community, whose members have dedicated themselves to making this a great platform. We look forward to continuing our industry leadership and delivering new innovations to our community and customers.

The post And The Winner Is!… appeared first on Chef Blog.


----


Announcing Chef Support for Amazon Linux 2



----
Announcing Chef Support for Amazon Linux 2
// Chef Blog

Today we are excited to announce Chef's support for Amazon Linux 2. Chef Automate delivers the most comprehensive automation platform spanning tasks from infrastructure to compliance all the way to applications. We do this using Chef, enabling infrastructure automation; InSpec, enabling compliance automation; and Habitat, enabling application automation. Chef is committed to automating the systems our customers are using both today and tomorrow. Our support of customer choice extends across the full spectrum of infrastructure and application stack components so that we help you automate what you want to use, where you want to.

Support for Amazon Linux 2 builds upon our goal of offering more native experiences to customers using AWS, previously evidenced by AWS OpsWorks for Chef Automate (OWCA), a fully managed service that delivers world-class configuration and compliance management.

With this announcement, customers running Amazon Linux 2 as Amazon Machine Images (AMIs), containers, or virtual machines can use their Chef Automate infrastructure to manage Amazon Linux 2, whether it runs within AWS or in on-prem deployments.

For a list of supported platforms, including Amazon Linux 2, see our docs site. You can learn more about Amazon Linux 2 here, and also try out AWS OpsWorks for Chef Automate to easily instantiate a fully managed Chef Automate deployment.

The post Announcing Chef Support for Amazon Linux 2 appeared first on Chef Blog.


----


Tuesday, June 26, 2018

vDisk Replicator Utility



----
vDisk Replicator Utility
// Citrix Blogs

I think everyone who has used Citrix Provisioning Services (PVS) comes away very impressed. The technology provides the ability to update and manage non-persistent XenApp and XenDesktop images in an incredibly efficient way. When it's time to update or deploy …



----


Citrix Optimizer 1.2 – What’s New



----
Citrix Optimizer 1.2 – What's New
// Citrix Blogs

Some equations are hard, some equations are easy. I've never been a fan of complex equations and have always preferred to keep things simple (and I was also never good at math). But today, I would like to share with you …



----


Scripts, scripts and more scripts! Package & deploy with Visual Studio Code + the Citrix Developer Extension



----
Scripts, scripts and more scripts! Package & deploy with Visual Studio Code + the Citrix Developer Extension
// Citrix Blogs

If you have been following along with our Citrix Developer tools releases, you may have noticed that we've been increasing our release cadence for the Citrix Developer extension for Visual Studio Code. If you're not familiar with the extension, it …



----


Deploy Applications with HPE OneSphere and Chef Automate



----
Deploy Applications with HPE OneSphere and Chef Automate
// Chef Blog

HPE Discover gets underway this week, and it's a great opportunity to see how HPE and Chef seamlessly integrate to help companies of all sizes manage their IT infrastructure. Together, Chef and HPE's enterprise systems deliver modern cloud velocity to put your organization on the path toward digital transformation.

With Chef and HPE, you can add configuration management and compliance to workloads managed by OneSphere, for example:

  • Virtual machines created by OneSphere APIs can be attached to Chef Automate for configuration management
  • Applications can be deployed in an infrastructure agnostic manner with Habitat
  • Virtual machines can be scanned for compliance using InSpec

HPE OneSphere is a hybrid cloud management platform that provides a complete enterprise overview spanning private data centers as well as public clouds like AWS and Microsoft Azure. Insights provided by HPE OneSphere help manage spending across private and public infrastructure. It is easy to get started with its rich set of APIs, which enable you to quickly create server deployments for your applications. You can take advantage of the integration between HPE OneSphere and Habitat to easily create deployments that can be managed by a Chef Automate server.
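
To make this concrete, here is a rough shell sketch of the "create, then attach" pattern described above. The OneSphere hostname, endpoint path, and payload fields are illustrative placeholders only (consult the HPE OneSphere API documentation for the real routes and schemas); the knife bootstrap step is standard Chef tooling.

# Hypothetical: ask OneSphere's REST API for a new VM deployment.
# Hostname, route, and JSON fields are placeholders, not the documented API.
curl -X POST "https://onesphere.example.com/rest/deployments" \
  -H "Authorization: Bearer $ONESPHERE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "web-vm-01", "zoneUri": "/rest/zones/example-zone"}'

# Once the VM is reachable, attach it to Chef for configuration management:
knife bootstrap web-vm-01.example.com --ssh-user cloud-user --sudo --node-name web-vm-01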

Check out our guest post on HPE's blog for a walkthrough of a sample architecture that can be easily implemented using these APIs. It details the key steps to efficiently deploy a web application with Habitat such that its virtual machines are connected to the Chef Automate server as soon as they are created via HPE OneSphere APIs.

Learn more about Chef and HPE

  • Visit Chef at HPE Discover in Las Vegas at Pathfinder booth #808
  • Attend our sessions at HPE Discover:
    • "Build and manage infrastructures and applications while maintaining compliance using Chef solutions and HPE Synergy"
      • Technical dive: Tuesday at 10am in Discussion Forum 3 and Wednesday at noon in Discussion Forum 1
      • Business perspective: Tuesday at noon in the Pathfinder Theater and Wednesday at 4pm in the Pathfinder Theater
  • Read the Reference Architecture paper: "HPE Reference Configuration for accelerating DevOps with HPE Synergy and Chef Automate"

The post Deploy Applications with HPE OneSphere and Chef Automate appeared first on Chef Blog.


----


“The most ambitious version of you”. Barry Crist’s challenge to the audience at ChefConf



----
"The most ambitious version of you". Barry Crist's challenge to the audience at ChefConf
// Chef Blog

Last month at ChefConf 2018 in Chicago, Chef CEO Barry Crist kicked off the proceedings with a provocative look at the state of digital transformation efforts, and asked the crowd: "what comes afterwards?" Barry's keynote focused on the challenges and aspirations faced by both companies and individuals in the industry today. How do companies and individuals keep from being left behind while outpacing their peers in driving innovation?

This video of Barry's presentation looks at what follows digital transformation, and at what is involved as organizations move from automating infrastructure to focusing on application delivery. As companies aspire to be like digital natives such as Amazon and Netflix, it's vital to shift automation efforts from infrastructure management toward delivering value through applications faster, and in a more consistent and predictable fashion.

As Barry walks through the risks and opportunities presented by this move to app-centered automation, he issues a challenge to everyone in the DevOps and automation industry: are you bringing your most ambitious self to the table? Listen to Barry as he dissects digital transformation, the shifts in the industry, and his perspectives and call to action for individuals to harness their ambitions to drive change that redefines the state of the art in the industry.

The post "The most ambitious version of you". Barry Crist's challenge to the audience at ChefConf appeared first on Chef Blog.


----


Chef Workstation – How We Made that Demo



----
Chef Workstation – How We Made that Demo
// Chef Blog

Editor's Note: ChefConf 2018 'How We Did It' Series

Welcome to the final entry in our How We Did It series, based on demos from the ChefConf 2018 Product Vision & Announcements keynote presentation. Today, Seth Thomas, Software Development Engineer on our Workstation team, will show us the newest project from Chef, Chef Workstation, and walk us through how that unikitten got from his local workstation to the servers we showed off on stage (and how you can do the same!). In case you missed it, review part one and part two of this series.


Chef Workstation

We've heard loud and clear that our community has needed an easier way to get started with Chef. The Chef Development Kit (ChefDK) has always provided a robust library of tools that we've optimized for quality pipelines and scaling throughout, but this has had the side effect of requiring new users to familiarize themselves with the fundamentals of developing Chef code before they can start tackling simple use cases. To this end, we've introduced Chef Workstation which, as per this RFC, will be supplanting ChefDK in the near future.

Initially, Chef Workstation will contain all the features that already exist in the ChefDK, in addition to extra tooling aimed at providing newcomers with easier workflows for simple configuration tasks. Before Chef Workstation can supplant the ChefDK, we want to be sure that we optimize the user experience of each individual tool and make them more cohesive as a unified workflow.

The use case we'll be looking at today is based on my live demonstration from our Product Vision & Announcements keynote at ChefConf 2018. We're also pleased to announce that we've included the code from this demo in the chef-workstation repository in the examples subfolder so you can easily follow along with today's blog post.

chef-run

The first new component of Chef Workstation is a utility called chef-run. This tool was created out of the desire to run ad-hoc tasks using Chef resources, recipes, or cookbooks without the typical overhead of first using knife bootstrap to connect your infrastructure to a Chef Server. Instead, chef-run allows you to run those ad-hoc tasks against any systems accessible via SSH or WinRM, without requiring you to first configure a Chef Server or pre-install our agent on those machines. To that end, let us construct a use case to highlight the benefits of this new tool.

Use Case

Let us pretend I am a fairly new member of an operations team who has received a helpdesk ticket. The request is to help create an environment for one of the web teams to test a prototype landing page using nginx. Another member of the team has already provisioned three servers for me, so all we need to do is apply some configuration.

Download and install Chef Workstation

Download instructions and getting started guides are available at: https://chef.sh

To get started, all I need to do is download this single package. Chef Workstation comes with everything I need to get started with Chef.

Simple Resource Execution

First, let's try running a Chef resource by itself, which is functionality we haven't exposed previously:

chef-run rhel-01 package ntp action=install  

As you can see, we're passing the target, which can be referenced by hostname (above) or IP address; the resource type, package; the package name, ntp; and the action, install. When we run this command, Chef will detect the running operating system and use the appropriate installation command, without us having to concern ourselves with the details of OS-specific utilities like apt, yum, or zypper.
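
Targets can also carry an explicit protocol, user, and port when the defaults don't fit; for example (the user and host below are placeholders):

chef-run ssh://admin@rhel-01:22 package ntp action=install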

Applying Recipes with chef-run

More complex tasks are often handled by Chef cookbooks, which allow us to collect multiple resources into recipes, which in turn handle the end-to-end configuration of a multi-step installation process. In this case, I've been provided with a simple cookbook for setting up nginx called deploy_website. Let's run the command to deploy it remotely, ad hoc:

chef-run rhel-01 deploy_website  

It's worth noting that cookbooks often contain multiple recipes for different aspects of a configuration task, but most contain a default recipe for the most common use case. The above example is functionally equivalent to running either of the following:

chef-run rhel-01 deploy_website::default  

OR

chef-run rhel-01 /path/to/cookbooks/deploy_website/default.rb  

Applying Parallel Updates to Multiple Systems

The above example is useful, but where this gets really interesting is when we run it against multiple servers:

chef-run rhel-01,rhel-02,rhel-03 deploy_website  

Since our servers follow a predictable naming convention, we can even simplify the above command by specifying a range of servers. The example below takes exactly the same action, but without our needing to specify each server individually.

chef-run rhel-0[1:3] deploy_website  

Summary and Next Steps

If you'd like to follow along with the examples detailed in this post, the deploy_website cookbook can be found in the chef-workstation project on Github. We're excited to see how you use Chef Workstation to start configuring your environments as you embark on your Continuous Automation journey!

Learn more

The post Chef Workstation – How We Made that Demo appeared first on Chef Blog.


----


Running Chef and InSpec with Habitat – How We Made that Demo



----
Running Chef and InSpec with Habitat – How We Made that Demo
// Chef Blog

Editor's Note: ChefConf 2018 'How We Did It' Series

Welcome to part two of our How We Did It series based on demos from the ChefConf 2018 Product Vision & Announcements keynote presentation. In case you missed it, review part one: Habitat and Kubernetes.

Today we'll look at the demo presented by Mike Krasnow, Product Manager on our Infrastructure Automation team. Mike's demo looks at a scenario where we can ensure that our infrastructure is configured consistently and securely by using Habitat, InSpec, and Chef together for end-to-end automation. In this blog post, Customer Engineer John Snow will take us on a guided tour of how it all came together.


Hab Solo: A ChefConf Story

Wow! What an amazing time at ChefConf, and what a great community of practitioners and thought leaders coming together to learn and grow. Mike Krasnow's demo showed a powerful example of the complete Chef ecosystem running together in harmony. As with any technical demo making lofty claims, it's easy to be skeptical of just how much of what you saw was an accurate representation of the presenter's kit. Let me assure you that what you saw was 100% real, and I'm here to give the technical breakdown of how it all worked.

The Setup

Our demo environment had a lot of moving parts, and in his introduction, our SVP of Product & Engineering Corey Scobie noted that it would be hard to summarize in a single image. In the first section, we had a Chef Automate 2.0 server, a CentOS development node, a CentOS production node, a webhook execution server (which we will talk about in a bit), a git repository to store our code, and Mike's trusty laptop.

The Detection

To get things started, we used Terraform to build out the development node and production node, install and initialize Habitat as a systemd service, and load the chef-demo/chef-base Habitat artifact. We also created a policyfile defining how to validate and harden our nodes, which looked like this:

# Policyfile.rb - Describe how you want Chef to build your system.
#
# For more information on the Policyfile feature, visit
# https://docs.chef.io/policyfile.html

# A name that describes what the system you're building with Chef does.
name "base"

# Where to find external cookbooks:
default_source :chef_repo, "../"

# run_list: chef-client will run these recipes in the order specified.
run_list ["hardening::default",
          "compliance::default"]

You will notice that the run-list has both a hardening cookbook, to ensure our configurations are secure, and a compliance cookbook, which makes use of the audit cookbook to run our InSpec scans. This ensured that the nodes Mike created were compliant. Prior to the demo, we removed the hardening cookbook from the run-list and rebuilt our Habitat artifact. We did this to ensure that our nodes continued to report their compliance state to Chef Automate, but didn't correct Mike's chaotic keystrokes too quickly.
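
For context, the audit cookbook is configured entirely through node attributes. A rough sketch follows; the exact attribute layout varies between audit cookbook versions, so treat these values as illustrative rather than the demo's actual configuration.

# Illustrative audit cookbook attributes (values are placeholders)
default['audit']['reporter'] = 'chef-server-automate'
default['audit']['profiles'] = [
  {
    'name' => 'linux-baseline',
    'url'  => 'https://github.com/dev-sec/linux-baseline/archive/master.tar.gz'
  }
]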

I love it when a plan.sh comes together

One of the core components of the demo was the chef-base Habitat package. All Habitat packages start with a plan.sh file to build the application, which in our case was the policyfile shown above. This is the plan file we used:

if [ -z ${CHEF_POLICYFILE+x} ]; then
  echo "You must set CHEF_POLICYFILE to a policyfile name."
  echo
  echo "For example: env CHEF_POLICYFILE=base build"
  exit 1
fi

scaffold_policy_name="$CHEF_POLICYFILE"
pkg_name=chef-base
pkg_origin=chef-demo
pkg_version="0.1.0"
pkg_maintainer="The Habitat Maintainers "
pkg_description="The Chef $scaffold_policy_name Policy"
pkg_upstream_url="http://chef.io"
pkg_scaffolding="core/scaffolding-chef"
pkg_svc_user=("root")

Well, that was simple. We are using a component of Habitat called scaffolding. A Habitat scaffolding is a standardized plan for building a type of application, in this case a Chef policyfile. The great part is that by having the line pkg_scaffolding="core/scaffolding-chef" I don't have to explicitly specify how to build or install my policyfile — I only need to provide some metadata and it will just work. Now I have a plan, but I need to build it. In order to do that, I enter my trusty Habitat Studio and run the build command to build the chef-base artifact. Because I have connected my repository to Builder, I can also build it by pushing my code changes to GitHub: when I merge a change, Builder will automatically build and publish a new artifact to the unstable channel.
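
Locally, that build flow looks roughly like this, run from the directory containing the plan (standard Habitat usage; your origin and paths will differ):

$ hab studio enter
# then, inside the studio:
build
# the resulting .hart artifact is written to ./results/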

Chef Client run in 12 Parsecs

So even though I am using scaffolding, I think it is still important to talk about how Habitat runs the Chef Client as an application. I may have bypassed the compressor to get to light speed, but it's always good to know how something works in case I have to troubleshoot later. Let's look at the init, run, and config files to better understand how they work. Just remember that all of this is part of the scaffolding, so you don't need to write or manage these files.

Init

Let's take a look at the init hook first. The init hook is responsible for initializing my application.

#!/bin/sh

export SSL_CERT_FILE="{{pkgPathFor "core/cacerts"}}/ssl/cert.pem"

cd {{pkg.path}}
exec 2>&1
exec chef-client -z -l {{cfg.log_level}} \
  -c $pkg_svc_config_path/client-config.rb

The beauty of Habitat is that all of this code is written in a language I already know: Bash for Linux and PowerShell for Windows. It's easy to write and understand because I simply write the steps I would take to set up and run the chef-client as a service, the same way I would in a shell script.

The config.rb

One of the features of Habitat is the ability to put configuration files into a config directory, which you can then use in your hooks to help configure your service. The chef-client has a config file where I can set variables to change some of its behavior. For example, one of these variables is data_collector.server_url, which tells the chef-client what the Chef Automate server URL is so it can report its run status. With scaffolding, this part is written for me as well. Here is the client-config.rb file built by the scaffolding:

cache_path "$pkg_svc_data_path/cache"
node_path "$pkg_svc_data_path/nodes"
role_path "$pkg_svc_data_path/roles"
ssl_verify_mode :verify_none
chef_zero.enabled true

unless ENV['BOOTSTRAP']
  data_collector.token "{{cfg.data_collector.token}}"
  data_collector.server_url "{{cfg.data_collector.server_url}}"
end

ENV['PATH'] = "{{cfg.env_path_prefix}}:#{ENV['PATH']}"
{{#if cfg.data_collector.enable ~}}
data_collector.token "{{cfg.data_collector.token}}"
data_collector.server_url "{{cfg.data_collector.server_url}}"
{{/if ~}}

You'll notice that I'm using the mustache template syntax; when the service is loaded, those placeholders are replaced with values from either the default.toml contained in the package or a user.toml file located in the Habitat service directory. Here is the default.toml file created by the scaffolding:

interval = 1800
splay = 180
log_level = "warn"
env_path_prefix = "/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin"

[data_collector]
enable = "false"
token = "set_to_your_token"
server_url = "set_to_your_url"

If I want to change these values, I create a user.toml file in the Habitat service directory, /hab/svc/chef-base/ on Linux. This rebuilds the client-config.rb file with the new values. If you look at the init hook, you can see that I am passing that file with the -c switch; this tells the chef-client how to run my policyfile and where to report the run status, namely Chef Automate.
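
As a hypothetical example, a user.toml that enables reporting to an Automate server and shortens the run interval might look like this (the token and URL are placeholders; the keys mirror the default.toml above):

# /hab/svc/chef-base/user.toml
interval = 900
log_level = "info"

[data_collector]
enable = "true"
token = "my-automate-token"
server_url = "https://automate.example.com/data-collector/v0/"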

Run

The scaffolding also has a run hook to continuously run the chef-client as a service. Here is the run hook:

#!/bin/sh

export SSL_CERT_FILE="{{pkgPathFor "core/cacerts"}}/ssl/cert.pem"

cd {{pkg.path}}

rm {{pkg.svc_var_path}}/init

exec 2>&1
exec chef-client -z -i {{cfg.interval}} -s {{cfg.splay}} -l {{cfg.log_level}} -c $pkg_svc_config_path/client-config.rb

You can see that it is similar to my init hook. The main difference is that I run the chef-client on an interval, with the -i switch, and with a splay, the -s switch, to space out chef-client runs so they don't overload the Chef Automate server or my nodes. There you have it: a nice Habitat to run a Chef policyfile in.

The Rest of the Story

There is more to this story than just how we got Habitat to run Chef. Specifically, Mike made a change to /etc/shadow, and that kicked off a sweet automated process that remediated the problem. So how did that work, you ask? Well, one of the great features of Chef Automate is the ability to create a webhook or a Slack notification. These notifications can send an alert based on any chef-client run failure or, as in our case, an InSpec compliance failure.

After Mike made his change to /etc/shadow, Habitat ran the chef-client, which ran our compliance cookbook, which invoked the audit cookbook, ran the linux-baseline profile, and reported the compliance failure to Chef Automate. Our webhook fired off a notification to a webhook server running a GitHub webhook handler, written by Kyleen MacGugan, that merged a pre-staged pull request, which updated the base policyfile to re-add the hardening cookbook to our node's run-list.

As you see in the video, Builder sees that a new change has been merged and kicks off an automated build. Once that is done, the development node's Habitat Supervisor, which is monitoring the unstable channel for the chef-demo/chef-base package, sees that a new version is available. The Supervisor installs the updated package and kicks off a chef-client run, which now includes the hardening cookbook and sets the permissions on /etc/shadow back to what they should be. Finally, it kicks off another run of the compliance cookbook, and we see the node report as compliant in Chef Automate. So there you have it: Habitat running Chef as a service with the power of Chef Automate.

Acknowledgements

I want to give a special thank you to Mike Krasnow, Kyleen MacGugan, Adam Jacob, Jon Cowie, Scott Ford, David Echols, and many more who worked on the code for this demo and contributed to making it great. I would also like to thank the Habitat team for all their work.

Learn more

The post Running Chef and InSpec with Habitat – How We Made that Demo appeared first on Chef Blog.


----


Habitat and Kubernetes – How We Made that Demo



----
Habitat and Kubernetes – How We Made that Demo
// Chef Blog

Editor's Note: ChefConf 2018 'How We Did It' Series

At ChefConf, we presented our Product Vision & Announcements for 2018. During the opening keynotes, attendees were treated to a live demonstration of Chef's latest developments for Chef Automate, Habitat, InSpec, and the newly launched Chef Workstation. This blog post will be the first of three deep-dive peeks under the hood of each of the on-stage demos we presented. We invite you to follow along and get hands-on with our latest and greatest features to serve as inspiration to take your automation to the next level! Today Nell Shamrell-Harrington, Senior Software Development Engineer on the Habitat team, will take us on a guided tour of using the Habitat Kubernetes Operator to start deploying apps into the Google Kubernetes Engine (GKE). Enjoy!


Habitat and Kubernetes are like peanut butter and jelly. They are both wondrous on their own, but together they become something magical and wholesome. Going into ChefConf, the Habitat team knew how incredible the Kubernetes-Habitat integration is, and we wanted to make sure that every single attendee left the opening keynote knowing it too. In case you missed it, you can watch a recording of the demo presentation here.

We decided to showcase the Habitat National Parks demo (initially created by the Chef customer-facing teams). This app highlights packaging Java applications with Habitat (many of our enterprise customers heavily use Java) and running two Habitat services: one for the Java app, and one for the MongoDB database used by the app.

Let's go through how we made this happen!

Initial Demo Setup

In order to set up the demo environment, we first created a GKE Cluster on Google Cloud. Once that was up, we needed to install the Habitat Kubernetes Operator.

One of the easiest ways to install the Habitat Operator is to clone the Habitat Kubernetes Operator GitHub repo:

$ git clone https://github.com/habitat-sh/habitat-operator
$ cd habitat-operator

That repo includes several example files for deploying the Habitat operator.

GKE requires that we use Role-Based Access Control (RBAC) authorization, which gives the Habitat operator the permissions it needs to manage its required resources in the GKE cluster. In order to deploy the operator to GKE, we used the config files located at examples/rbac:

$ ls examples/rbac/
README.md  habitat-operator.yml  minikube.yml  rbac.yml

Then we applied the RBAC configuration with:

$ kubectl apply -f examples/rbac/rbac.yml

Then we could deploy the operator using the file at examples/rbac/habitat-operator.yml. This file pulls down the Habitat Operator container image from Docker Hub and deploys it to our GKE cluster.

$ kubectl apply -f examples/rbac/habitat-operator.yml

We also needed to deploy the Habitat Updater, which watches Builder for updated packages and pulls and deploys them automatically (the real magic piece of the demo).

$ git clone git@github.com:habitat-sh/habitat-updater.git
$ cd habitat-updater
$ kubectl apply -f kubernetes/rbac/rbac.yml
$ kubectl apply -f kubernetes/rbac/updater.yml

At this point, our GKE cluster looked like this:

And we were ready to get National Parks deployed to the cluster!

Deploying the App

For the purposes of the demo, we decided to go ahead and fork the National Parks app into our own repository.

Then we added the necessary Kubernetes config files in the habitat-operator directory:

$ git clone git@github.com:habitat-sh/national-parks.git
$ cd national-parks
$ ls habitat-operator
README.md  gke-service.yml  habitat.yml

The gke-service.yml file contains the config for an external load balancer, which we created with:

$ kubectl apply -f habitat-operator/gke-service.yml

Then it was time to deploy the actual application with habitat.yml. This file pulls the MongoDB container image and the National Parks app container image from Docker Hub, then creates containers from those images.

$ kubectl apply -f habitat-operator/habitat.yml

Then, our GKE cluster looked like this:

And we could see the running app by heading to the IP address of the GKE service:

So at this point we could easily demo creating a new deployment of the National Parks app – but the real magic would be creating a change to the app and seeing it seamlessly roll through the entire pipeline.

Prior to the demo, I created and stashed some style changes to the National Parks app (which would show up well on a projection screen).

During the demo, I took those changes out of the stash:
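
Re-applying stashed changes is a single command:

$ git stash pop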

Then I committed these changes to the national-parks GitHub repo:

$ git add .
$ git commit -m 'new styles'
$ git push origin master

We had previously connected this GitHub repo to the public Builder, so any changes pushed to the master branch automatically trigger a new build on Builder.

We had also previously connected our Builder repo to Docker Hub – as soon as a new build completes, a new Docker container image of our app is automatically pushed to Docker Hub.

Now, all that was left to do was to promote the new build to the stable channel of Builder:

Now the Habitat Updater came into play. The purpose of the Habitat Updater is to query Builder for new stable versions of Habitat packages deployed within the cluster. Every 60 seconds, the Updater queries Builder and, should there be an updated package, pulls the Docker container image for that package from Docker Hub and re-creates the containers running that image.

And then all we had to do was revisit the IP address of the load balancer, and we could see our changes live!

The whole purpose of this demo was to showcase the magic of Habitat and Kubernetes. Based on reactions to the demo, we succeeded!

Acknowledgements

Although I had the privilege of running the demo onstage, credit for creating this demo must also be shared with Fletcher Nichols and Elliot Davis – two of my fellow Habitat core team members. It takes a village to make a demo and I am so lucky to have the Habitat core team as my village.

Get started with Habitat

The post Habitat and Kubernetes – How We Made that Demo appeared first on Chef Blog.


----


InSpec GCP Deep Dive



----
InSpec GCP Deep Dive
// Chef Blog

As recently announced, Chef has deepened support for Google Cloud Platform (GCP) by adding InSpec integration. Furthermore, the InSpec GCP resource pack is freely available here; suggestions (and contributions) are welcome!

Before looking at InSpec GCP, let's ensure we have the necessary prerequisites in place. As an aside, we will be following setup steps similar to this excellent introductory article. Note that this is a good time to update to the latest InSpec version, as the minimum requirement is 2.2.10. The GCP SDK should also be installed and configured. With that done, let's create an InSpec profile that makes use of inspec-gcp:

$ inspec init profile gcp-inspec-deep-dive
Create new profile at /Users/spaterson/inspec-gcp-deep-dive-profile
 * Create directory libraries
 * Create file README.md
 * Create directory controls
 * Create file controls/example.rb
 * Create file inspec.yml
 * Create file libraries/.gitkeep
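
Before running anything against GCP, a quick sanity check of the prerequisites (assuming the inspec and gcloud CLIs are on your PATH):

$ inspec version   # should report 2.2.10 or later
$ gcloud auth list # confirms the GCP SDK is configured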

In order to save time, an example profile with all the controls in this article is available here. This leverages an attributes.yml file to supply parameters to the InSpec tests. From the root of the profile, these are executed via:

$ inspec exec . -t gcp:// --attrs attributes.yml
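
The attributes file itself is plain YAML. A hypothetical minimal version for this walkthrough might look like the following (the sample profile's actual keys may differ); controls can then read these values via InSpec's attribute() helper, though the examples below hardcode 'gcp-project' for readability.

# attributes.yml (illustrative)
gcp_project_id: gcp-project
gcp_zone: us-east1-b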

Let's start with a simple first use case. Work with any cloud provider for long enough and you might be lucky enough to experience a zone (or region) becoming unavailable. This can sometimes create ancillary errors that are difficult to pinpoint. So how might we go about writing a simple InSpec test to confirm all GCP zones are up? For the purposes of this article, let's assume we're already up and running with GCP and have a project called gcp-project to test against. Let's start simply and see what testing one zone looks like. To facilitate this, we use the google_compute_zone resource.

control 'gcp-single-zone-1' do
  title 'Check the status of a single zone'
  describe google_compute_zone(project: 'gcp-project', zone: 'us-east1-b') do
    it { should exist }
    its('status') { should eq 'UP' }
  end
end

Running this with InSpec we see:

This is fine, but we had to supply the zone name 'us-east1-b'. What if new zones are added and our company starts using them immediately? Does the test need to be updated? Ideally we want to insulate ourselves from these kinds of details and run across all available GCP zones each time. Here is how InSpec helps us achieve that with plural resources (more details here):

control 'gcp-zones-all-2' do
  title 'All zones should be UP'
  google_compute_zones(project: 'gcp-project').zone_names.each do |zone_name|
    describe google_compute_zone(project: 'gcp-project', name: zone_name) do
      it { should exist }
      its('status') { should eq 'UP' }
    end
  end
end

Let's go through this in more detail. First, we get all the google_compute_zones for the project, then use each zone_name to look up the corresponding singular google_compute_zone resource. This is a good example of using an InSpec plural resource to gather a collection of singular resources that we then test one by one.

Now let's see what happens when we run the above control with InSpec:

(output curtailed as there are 46 zones at the time of writing…)

Of course in practice, the list of zones being checked could be limited to those in use.

Now let's look at a more realistic example. Let's assume we have a set of firewall rules in use for our GCP project and want to make absolutely sure that the following rules are enforced:

  • SSH on port 22 is forbidden everywhere
  • HTTP on port 80 is forbidden everywhere

Similarly to checking zone status, we can leverage the google_compute_firewalls plural resource to loop over all the firewall rules and examine each one in more detail:

control 'gcp-firewalls-all-3' do
  title 'Ensure no SSH or HTTP allowed in firewall rules'
  google_compute_firewalls(project: 'gcp-project').firewall_names.each do |firewall_name|
    describe google_compute_firewall(project: 'gcp-project', name: firewall_name) do
      it { should exist }
      its('allowed_ssh?') { should be false }
      its('allowed_http?') { should be false }
    end
  end
end

Running this with InSpec, we can easily see whether or not (as shown below) we are in a compliant state:

When any organisation starts to use the public cloud, tags (or, for GCP, labels) become an essential way to keep track of what's going on. An efficient tagging strategy is often essential for operational transparency, amongst other things. Let's consider a hypothetical organisation where operational edge cases, such as VMs with uptime beyond the normal life-cycle, are tracked with labels. Following best practices, let's assume processes are in place to automatically remove non-compliant compute resources. Since it's the real world, let's also imagine that a label such as "operations_override_do_not_kill" is applied in exceptional circumstances to a running virtual machine to make it exempt from automatic termination – sound familiar? Given that adding the label bypasses the established process, how can we now monitor for compliance? Let's see how we could write a control to discover such non-compliant compute instances:

control 'gcp-all-compute-labels-4' do
  title 'Ensure there are no compute instances with operations_override_do_not_kill label in use'
  google_compute_zones(project: 'gcp-project').zone_names.each do |zone_name|
    google_compute_instances(project: 'gcp-project', zone: zone_name).instance_names.each do |instance_name|
      describe google_compute_instance(project: 'gcp-project', zone: zone_name, name: instance_name) do
        its('labels_keys') { should_not include 'operations_override_do_not_kill' }
      end
    end
  end
end

This is slightly more involved, so let's step through what's happening. Supplying only the GCP project name to this control, we first retrieve all available zones and loop over them. For each zone, the list of compute instances is then retrieved. Next we examine each compute instance individually and confirm whether or not our favorite "operations_override_do_not_kill" label is present.

Running this with InSpec in a situation where one machine has this label yields the following sample output:

Hopefully the above demonstrates that, with only minimal customization, InSpec continuous compliance checks can be a powerful tool for those using GCP or indeed other cloud providers. As previously mentioned, the inspec-gcp resource pack welcomes suggestions for improvement and contributions from the community. The sample compliance profile containing the above checks is available here.

The post InSpec GCP Deep Dive appeared first on Chef Blog.


----


ChefConf 2018 Keynote Video: Product Vision & Announcements



----
ChefConf 2018 Keynote Video: Product Vision & Announcements
// Chef Blog

It's hard to believe ChefConf 2018 was nearly a month ago! We were excited to welcome Chef customers and community members to Chicago and share some of the exciting product developments we've been working on over the last year.

Corey Scobie, our new SVP of Product and Engineering, announced the general availability of Chef Automate 2, our enterprise continuous automation platform. Over the last year, we've completely rearchitected Chef Automate from the ground up to provide rich visualizations for operational analytics across your infrastructure, applications, and compliance. Corey and his team also announced several other exciting developments on-stage at ChefConf:

  • The beta release of Chef Workstation, the unified desktop experience for Chef practitioners, aimed at making Chef easier to consume;
  • Availability of Habitat Builder on-premise, as well as many other exciting integrations into the container ecosystem, such as a Helm exporter, support for Azure Kubernetes Service (AKS) as a target, the Habitat Open Service Broker, and more;
  • Integrated cloud compliance features in Chef Automate, building upon the work from InSpec 2.0, and now with Azure and Google Cloud Platform (GCP) support in beta.

Corey also showed off some work we've been doing to explore how traditional configuration management might be driven from the application's point of view, instead of the other way around.

You can watch Corey's keynote here in full:

We'd love for you to try out our new products and features:

The post ChefConf 2018 Keynote Video: Product Vision & Announcements appeared first on Chef Blog.


----


Understanding Singular and Plural InSpec resources



----
Understanding Singular and Plural InSpec resources
// Chef Blog

InSpec enables you to automate compliance by expressing the expected state of many things. A common need is to find a group of things, then examine them in detail.

If you are using the cloud, you know that cloud security is paramount, but difficult to keep enforced. Even if you have your infrastructure creation automated, it is still possible, through error or malice, for an individual using other means (such as a web UI) to make changes to your infrastructure that may go undetected.

Positive tests – "this thing should exist, and be configured in this manner" – are often available using infrastructure provisioning tools. But from a compliance and security standpoint, it is often the negative tests that are of more concern – "this and only this should exist; nothing should be configured in this insecure manner." Provisioning tools are generally not able to detect such things.

InSpec addresses this problem by loosely categorizing its resources into two groups: resources that are able to list all members of a resource type, and resources that are able to examine a resource type in detail. We call these plural and singular resources, and name them accordingly.

For example, in your AWS environment, you may wish to check your firewall rules, called "security groups." InSpec provides an "aws_security_groups" resource that specializes in listing and filtering the security groups, as well as an "aws_security_group" resource that allows you to audit an individual group in-depth.

control "No group should allow in port 22" do
  aws_security_groups.security_group_ids.each do |security_group_id|
    describe aws_security_group(security_group_id) do
      it { should_not allow_in port: 22 }
    end
  end
end

InSpec's output will treat this as one control, with one test for each security group. If a security group violates the policy by allowing in port 22, that individual test will fail, with clear output identifying the security group by ID.

But plural resources don't have to enumerate all members of the resource type. All plural resources support powerful filtering mechanisms.

For example, you might want to ensure that all IAM users on your AWS account have multi-factor authentication:

describe aws_iam_users.where(have_mfa_enabled?: false) do
  it { should_not exist }
end

This is a common pattern using a plural resource to express non-existence. It is faster than the plural-singular construction above, but the output when failing is less clear (it will indicate that there is at least one user without MFA, but not which ones). One way around that is this idiom: list the matching usernames, and expect the list to be empty. If the list isn't empty, the offending username(s) will be shown.

describe aws_iam_users.where(have_mfa_enabled?: false) do
  it { should_not exist }
  its('usernames') { should cmp [] } # Less readable, but failure output is better
end

You can also execute arbitrary code in the where block. Suppose your AWS networking setup is configured such that your application VPCs all have IP address blocks that begin with 10, while your admin VPCs begin with 172. You can select them using aws_vpcs, and have Ruby's IPAddr class perform the subnet calculations.

require 'ipaddr'

# List all application VPCs
aws_vpcs.where { IPAddr.new('10.0.0.0/8').include?(cidr_block) }.vpc_ids.each do |vpc_id|
  # Find the default security group in the VPC
  describe aws_security_group(group_name: 'default', vpc_id: vpc_id) do
    # Make sure it does not allow in any traffic by default
    it { should_not allow_in() }
  end
end

With the recent release of InSpec 2.2, several enhancements have been made to the facility that supports "plural" resources, improving consistency and performance, and fixing some bugs and unpredictable behavior.

  • Lazy Column Loading. A plural resource may now defer populating columns until they are accessed. The aws_iam_users resource has been updated to take advantage of this feature, and its performance is greatly improved.
  • Developer Documentation. The support library is now fully documented, allowing project developers and community contributors to understand it more quickly and leverage less-well-known features.
  • Standardized Features. All plural resources now receive the where, raw_data, entries, count, and exist? methods automatically. This reduces copy-pasting when making a new plural resource, and allows InSpec users to leverage prior knowledge: "If it ends in s, I can call where and count…" (see the sketch after this list).
  • Several Bug Fixes. Several defects and surprising behaviors have been corrected, primarily around exception handling and validation.
  • Enable Future Self-Documentation. By more clearly differentiating the intent of each property in the resource codebase, we have cleared the path to being able to generate documentation from the code itself.

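As a quick illustration of the standardized methods called out above (the filter criterion here is illustrative, not a recommendation):

# 'where', 'count', and 'exist?' come for free on every plural resource
describe aws_security_groups.where(group_name: 'default') do
  it { should exist }
  its('count') { should be >= 1 }
end
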
If you'd like to check out singular and plural resources working together, try out some of these resources:

aws_security_group / aws_security_groups
package / packages
aws_ec2_instance / aws_ec2_instances

The post Understanding Singular and Plural InSpec resources appeared first on Chef Blog.


----


A Look at Deploying an Icinga2 Server with Chef Cookbooks



----
A Look at Deploying an Icinga2 Server with Chef Cookbooks
// Chef Blog

The thought of deploying Icinga with Chef is an exciting prospect. You get all the benefits of the flexible monitoring Icinga provides with all the reliability and scalability of Chef's configuration management. So, when I heard that Icinga had ditched their old monolithic cookbook design, I knew I needed to have a look.

What's new?

Icinga has maintained a monolithic cookbook in the Chef Supermarket since 2014, but they've taken a big step toward manageability by breaking it up into four smaller cookbooks for easier management and customizability. Where once there was one, there are now four separate cookbooks: icinga2, icinga2repo, icinga2client, and icingaweb2.

Let's walk through a deployment using these four Icinga 2 cookbooks to see what is required to set up a simple Icinga2 master with a web interface.

Setting up the Cookbook

These Icinga cookbooks are fairly straightforward, although there are a few caveats to look out for. The current versions of these cookbooks leave the creation of the databases to the user, so we will have to create them ourselves. This can be handled with a standard package/service block to install MariaDB and a few execute resources to set up users for both Icinga2 and Icingaweb2.

Here's an example installation cookbook:

[Screenshot: an example installation cookbook]
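
A minimal sketch of such a wrapper recipe, with placeholder database names and credentials, might look like this (the original post shows the real recipe as a screenshot):

# Install and start MariaDB, then create databases and users for
# Icinga 2 and Icinga Web 2. Names and passwords are placeholders.
package 'mariadb-server'

service 'mariadb' do
  action [:enable, :start]
end

%w(icinga2 icingaweb2).each do |db|
  execute "create #{db} database and user" do
    command "mysql -e \"CREATE DATABASE IF NOT EXISTS #{db}; " \
            "GRANT ALL ON #{db}.* TO '#{db}'@'localhost' IDENTIFIED BY 'changeme';\""
    not_if "mysql -e 'SHOW DATABASES' | grep -qw #{db}"
  end
end

include_recipe 'icinga2repo::default'
include_recipe 'icinga2::default'
include_recipe 'icingaweb2::default'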

The default recipes do a good job of setting up everything you need in a fresh Icinga2 server install; however, they do not yet handle the configuration of the web interface. After deploying your cookbook to a node, you will need to visit <icinga-server-fqdn>/icingaweb2/setup to finish the installation.

Configuring the Web Interface

[Screenshot: configuration of Icinga Web 2]

In order to generate the setup token, you will need to run the following commands from your new Icinga2 master node, and then copy the token into the text box on the web interface.

[Screenshot: commands from your new Icinga2 master node]
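
In current Icinga Web 2 releases, the token is typically generated with the bundled icingacli, e.g.:

$ icingacli setup token create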

What follows is a standard setup of the web interface, using the default settings that come with the cookbooks. Passwords and account names can be changed through the attributes file.

[Screenshot: the standard setup of the web interface, using the default settings that come with the cookbooks]


These are the default configurations for the two databases set up by the Icinga2 and Icingaweb2 cookbooks. To allow these cookbooks to set up the schema for these databases, the following two attributes must be overridden:

Additionally, if you would like to use the API as your command transport option, you will need to push an API User Object with the correct credentials. This is shown below.

In addition to the installation recipes, Icinga's cookbooks also provide a number of useful LWRP resources for pushing Icinga2 configuration objects. The one shown above pushes the correct API user.

You can find a complete list of the Icinga2 LWRPs here.

Final thoughts

Overall, the installation process is fairly straightforward. You can quickly get an Icinga server up and running with a wrapper cookbook and a few lines of code. Moving forward, I hope the configuration of the web interface can be fleshed out and eventually automated.

In addition, it would be nice if the Icinga2 cookbook offered an option for an automatic database installation, though I can understand its exclusion in the name of flexibility.

Interested in learning more?

Shadow-Soft consultants regularly share ideas on our open source blog and at local meetups and events in Atlanta. Contact us if you'd like to talk about using Icinga and Chef together.

The post A Look at Deploying an Icinga2 Server with Chef Cookbooks appeared first on Chef Blog.


----


Accelerating time to productivity with Chef Automate using OpsWorks for Chef Automate



----
Accelerating time to productivity with Chef Automate using OpsWorks for Chef Automate
// Chef Blog

At ChefConf 2018 in Chicago last month, Arun Gupta, AWS Principal Open Source Technologist, showcased how you can check worker nodes in a Kubernetes cluster for compliance using OpsWorks for Chef Automate (OWCA). As part of the conference, AWS open sourced the code used; give it a try! You can download the code here.

Arun Gupta, AWS Principal Open Source Technologist at ChefConf 2018

AWS DevDays

Building on our commitment to customers, we are excited to bring a set of dedicated AWS DevDay trainings to Berlin (June 12th, 2018), London (June 19th, 2018), Irvine, CA (June 27th, 2018) and New York City (July 26th, 2018). AWS and Chef technical experts will deliver these free, full-day technical training events, in which developers will learn how to detect potential security and compliance issues in AWS environments, and how to quickly and efficiently remediate those issues. At the London training, we are delighted to be joined by our partner Contino, an industry-leading transformational technical consultancy. Contino will share best practices from their experience in helping large enterprises adopt DevOps and cloud.

Chef Automate encompasses Chef's open source engines InSpec, Chef, and Habitat, giving you the tools to detect, correct, and continuously automate your way to delivering software at the speed of business. Building upon these open source tools, Chef Automate provides built-in compliance assets and real-time data across your estate.

AWS OpsWorks for Chef Automate provides a fully managed Chef Automate installation that enables customers to realize value from their Chef deployments on AWS quickly, by provisioning the Chef Automate environment and offloading the overheads involved in running and maintaining their Chef Automate (and Chef Server) deployments to AWS.

Register for a training in Berlin, London, Irvine, CA, or New York City. We would love to see you there!

The post Accelerating time to productivity with Chef Automate using OpsWorks for Chef Automate appeared first on Chef Blog.


----


Sunday, June 3, 2018

Xen Orchestra 5.20



----
Xen Orchestra 5.20
// Xen Orchestra

Xen Orchestra 5.20

Our monthly release is here, with a bunch of new features and bug fixes. Time to update and discover what's new!

XCP-ng updates

Xen Orchestra is now able to update your XCP-ng hosts directly from the UI! This is a major improvement in XCP-ng usability, without sacrificing our promise to stay close to the upstream: we still rely on yum like any other CentOS-based system, but we can now also drive it from XO!


Read our blog post to learn how it works.

UI improvements

Better usage reports

Usage reports now filter out useless objects and more clearly display the evolution of each resource (host CPU, RAM, etc.).


List useless VM snapshots

When you remove a backup job, the associated snapshots aren't removed. To avoid a long hunt for those useless snapshots, we can now detect them automatically in the "Health" view, so you can remove them in one click!

HA advanced options

You can now configure the XCP-ng/XenServer HA behavior for each VM: "restart", "restart if possible" and "disable". For more details, please read the documentation on HA.

Show control domain in VDI list

We now display the Control Domain if a VDI is attached to it. This is helpful to understand what's happening on your storage.

Set a remote syslog host

You can now set a remote syslog host directly from the UI:


This feature is really useful when you want to centralize all your hosts' logs somewhere, for various reasons:

  • compliance
  • security
  • log analysis/parsing
  • and more!

Backup

Backup concurrency

A new option is available: you can now cap how many VM backups a job runs in parallel. For example, you could allow a maximum of 10 VM backups at once. By default there is no limit ("0" in the text field), but we still enforce maximum values. Please read our blog post regarding backup concurrency in Xen Orchestra.


Improved backup logs

A lot more detail in the backup logs! Each step is now individually visible, with its duration, current status, etc.


Now you can see in real time which steps failed (snapshot, transfer, merge), which ones succeeded, and how long each took.

Improved backup reports

Same story for backup reports: they are far more detailed:


Retry a single failed VM backup

In a job log view, if a VM failed to be backed up, you can now retry it. Very handy when the failure had a specific cause, like a protected VDI chain, or when a single VM failed among hundreds that succeeded: no need to restart the whole job!


----
