Wednesday, May 24, 2017

----
Chef Announcements: Lighting the Path from Here to There
// Chef Blog

Technology shifts, such as the moves toward cloud-native applications and container-first architectures, make it clear that tomorrow will look very different from today. But how to get from here to there? Today Chef announced a host of new product features, partner offerings, and skill-building opportunities that, taken together, light a path for enterprises to follow.

At Chef, we're helping our customers on their journey to continuous automation, developing the ability to deliver software at speed, with efficiency and low risk. Chef's continuous automation platform, Chef Automate, helps teams build, deploy, manage, and collaborate across all aspects of software production: infrastructure, applications, and compliance. The organizations we see adopting this approach are simplifying their software delivery and making it consistent, regardless of environment or application shape.

Three themes in today's announcement offer guidance to those who are ready to move faster on their continuous automation journeys: continuous compliance, application-centric operations, and skill building.

Continuous compliance is necessary to deliver at speed

As applications deconstruct into microservices, development teams deliver more quickly but risk multiplying security vulnerabilities just as quickly. What's needed is an approach that integrates information security into the software development lifecycle. With today's announcements, Chef makes compliance automation an integral part of the software deployment cycle, across any environment. Chef Automate now features extensive compliance automation capabilities, including powerful reporting features tailored to the different roles that must work together to deliver secure software. Additionally, InSpec can now test, interact with, and audit cloud platforms including AWS, Microsoft Azure, and VMware vSphere. These capabilities give enterprises the tools they need to build security into software from the start, and help Information Security teams align with their DevOps counterparts. Organizations no longer have to make trade-offs between speed and security as they move to modern application architectures.

READ THE WHITE PAPER

Application-centric operations to support an application-centric world

As enterprises prioritize rapid software delivery, the stresses on DevOps teams reach well beyond security and compliance. Today, Chef announced a number of advancements to Habitat that make it easier for teams to build, deploy, and manage both modern and legacy applications across hybrid environments. The new Habitat Builder service helps ensure consistent packaging, so any application can run anywhere. The service helps organizations manage applications without worrying about underlying infrastructure and dependency complexity and change. To make application management even easier, Habitat now ships with an initial set of 20 Habitat plans for common enterprise application components, as well as scaffolding features for popular application languages and frameworks. With them, developers can quickly create Habitat packages. At ChefConf, Chef is previewing early work to integrate Habitat with Automate, further strengthening support for continuous automation.

TRY HABITAT

Focus on building skills to navigate change

Moving from here to there means people's skills must move from here to there as well. With Learn Chef Rally, Chef has created an experience designed to accelerate skill-building for all Chef practitioners, regardless of their starting point. Just as the DevOps movement is driven by practitioners who are always willing to learn new skills, so too does continuous compliance and an application-centric posture rely on skill development. Those organizations that gain expertise the fastest and align their teams accordingly will capture the largest share of the opportunities offered by a cloud-native, container-first world.

GET STARTED WITH LEARN CHEF RALLY

Learn More

To learn more about today's announcements and how Chef can help you move forward on your path, be sure to check out the announcement posts that follow.

----
Enterprise Ready Habitat Plans Now Available
// Chef Blog

Today at ChefConf 2017 we are excited to launch a new program for enterprise content around Habitat and announce a first set of "enterprise ready" Habitat plans. The plans are all open source and serve as best practice references to guide you as you write plans for your own applications. For example, the PostgreSQL Habitat plan demonstrates how a highly-available stateful application can be deployed with replication and failover. The WordPress Habitat plan shows you how to deploy Nginx in front of the application as a load-balancing proxy.
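
For instance, running the PostgreSQL plan in a leader-follower topology and binding your own service to it might look something like this (a sketch; the origin, group, and bind names here are illustrative):

# Start PostgreSQL under the Habitat Supervisor in a leader topology (illustrative)
hab start core/postgresql --topology leader --group production

# Run your own Habitat service and bind it to that PostgreSQL service group
hab start myorigin/myapp --bind database:postgresql.production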

Enterprise Ready Habitat Plans

What does it mean to be "enterprise ready"? Generally speaking, these plans:

  • Support the way software is deployed in the enterprise. For instance, you can easily deploy in clustered and high-availability topologies.
  • Come with out-of-the-box support for common enterprise patterns such as backup/restore. For instance, we integrate tools such as Wal-E and Stark & Wayne's Shield product into the PostgreSQL and Redis plans for easy backup and restore.
  • Hide the complexity. These plans get you up and running quickly through clear documentation, common patterns, standard practices, and examples of how to integrate them with your own Habitat plans to create full-stack applications. (The WordPress and Drupal plans are both great multi-tier application examples.)

Recognizing Barriers to Using Habitat

Habitat enables you to deploy your application to any platform coupled with the automation you'll need to manage it. Whether you're packaging a new application in Habitat or moving an existing application into Habitat as part of a replatforming effort, you'll have dependencies on third-party software products such as databases, messaging systems, application servers, analytics pipelines, and monitoring systems.

Packaging and running software in enterprise scenarios also comes with myriad specific requirements and considerations. A few things to think about (from a very long list) include clustering, high availability, secure deployment, integrated backup, and complete documentation and runbooks.

Too often, someone has written the Habitat plan for the application itself but not thought about plans for everything else you need. So a natural barrier to moving your application quickly into Habitat is the broad set of plans that probably need to be written. At Chef we've recognized these barriers, and wanted to provide a robust set of plans to reduce the friction of getting started with Habitat.

Open Source Collaboration

The Chef open source community has been collaborating on this sort of content for many years in the form of cookbooks. We at Chef are proud of this Chef ecosystem and the quality of the contributions. Knowing we had such great material at hand, we began our new program by reviewing the 100 most popular cookbooks on our community portal, Chef Supermarket, and identifying which of them to translate into Habitat plans.

We then worked with four of our partners, Container Solutions, Endocode, Fast Robot, and Stark & Wayne, to produce the plans. Together, we selected applications that took advantage of their deep domain expertise and that they commonly see in their consulting engagements. We built out the plans as full stacks to show how you can take all the parts of your application and package them with Habitat. Our initial offerings cover five areas:

  • Big Data – Cassandra, Spark, Storm, Kafka, Zookeeper, CrateDB
  • Monitoring – Prometheus, Grafana
  • Middleware – WebSphere, MuleSoft, Varnish, RabbitMQ, Consul
  • Databases – PostgreSQL, MySQL, Redis, Shield
  • Developer and Content Tools – Jenkins, Drupal, WordPress

All the plans are now available in the Habitat core-plans GitHub repository for you to look at, download, and use.

Learn More at ChefConf

Drop by the Habitat zone at ChefConf on Tuesday afternoon for demos by our partners who, along with me, look forward to seeing you and answering your questions. ChefConf also has six talks on Habitat, including real-world stories about how companies such as GE and Media Temple are leveraging Habitat.

----
Compliance automation: Bridging the gap between development & information security
// Chef Blog

Speed is nothing without control

DevOps makes software deployment faster. But without proper controls, that may mean that developers are also releasing security vulnerabilities more quickly. The fast pace of innovation will not be slowing down, and the pressure to deliver rapidly keeps increasing. At the same time, cybersecurity threats keep getting more innovative while doing more damage in a shorter amount of time. Organizations have to learn how to ship software quickly without increasing their exposure to risk.

The problem is most organizations see this as a tradeoff. Either they can focus on speed and lose safety, or vice versa. The solution is to stop treating information security as a bolt-on afterthought. Organizations can scale both speed and safety by extending Agile, Lean, and DevOps (ALDO) principles to their information security teams. InfoSec teams need to adopt automation tools that build security into the development cycle.

DevOps is the new operating model

When applied, ALDO principles build high-velocity organizations with streamlined processes and flexibility to respond quickly to any situation. Continuous delivery puts those principles into practice in service of shipping software faster, safer, and more reliably.

Should your organization practice continuous delivery and follow ALDO principles? Most organizations already understand the value of moving fast so the response to that question is obvious. But when you ask those same organizations if they can deliver software continuously and still remain compliant with information security standards, their response is anything but obvious. That's because most information security teams don't have the tools to move at high velocity.

In our latest Compliance Automation white paper, we deliver a view of the current state of information security. We also examine the differences between development postures and security postures that create the perceived trade-offs between speed and safety; use industry data to show how high-performing IT organizations bridge these gaps to scale both; and explore an example workflow that illustrates how a cohesive solution to this problem comes together.

GET THE WHITE PAPER

----
Scaffolding: Build and package apps
// Chef Blog

"The best plan is to barely have one at all." – Ancient Habitat proverb

Over the past several months we've been working on a developer-focused feature of the build system that helps you bundle up your applications to be run, managed, and even updated with Habitat. We call this Scaffolding.

Scaffolding helps developers build and package applications that follow the common patterns and practices for their application type. In other words, the more your application follows the conventions, the more Scaffolding helps you.

We're launching this feature with two Scaffolding packages: one for Ruby-based apps and one for Node.js-based apps. In the near future we plan on adding support for Go, Python, and JVM-based apps, but we believe we have a good enough start to share this build pattern more widely.

Scaffolding's Assumptions

First, this feature is obsessively focused on the application developer's experience; as a result, our concern is developers building their own applications. Habitat is developing a good track record for building other third-party software (for example: PostgreSQL, Bash, Rust, etc.), but as a developer, I want Habitat to be great at building my software. We expect and assume that you want your Habitat Plan to live alongside your codebase, in the same version control repository.

Secondly, a Scaffolding tries to make the best possible build experience for your app type by looking for common application patterns and practices that exist today. If you've already deployed your app to a Platform-as-a-Service provider (PaaS) or packaged your app in a container, chances are your application follows some or most of the practices from Heroku's Twelve-Factor App manifesto. Scaffolding exploits these same conventions so much like the Ruby on Rails framework, when you follow the conventions, you are rewarded with less configuration and setup.

Detection

Scaffolding unlocks a powerful new behavior at build time: the ability to detect and react to the needs of an application codebase. Let's look at an example to see this in action. The output lines below are from building a Ruby on Rails web application called "habirails". The Plan which builds this app's package is very simple and contains one new build variable: pkg_scaffolding.

pkg_name=habirails
pkg_origin=fnichol
pkg_version=0.1.0
pkg_scaffolding=core/scaffolding-ruby

When we build this Plan, we'll see some of the following in the build output:

   habirails: Detected Rails 5 app type

The Ruby Scaffolding understands some specific Ruby web frameworks such as Ruby on Rails and Rack. In this case it has detected a Rails 5.x application and can use that knowledge later on.

   habirails: No Ruby version detected in Plan or Gemfile.lock, using default 'core/ruby'

There are canonical locations where Ruby developers select a specific version of Ruby; one of them is the ruby keyword in a Gemfile. The Ruby Scaffolding loads the project's Gemfile and Gemfile.lock (using Ruby itself and calling the Bundler codebase as a library) so that it can correctly parse this information. In this case no version was specified in the Gemfile or in the plan.sh, so a default Habitat package of core/ruby was chosen.
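
For example, a Gemfile that pins a Ruby version looks like this (versions here are illustrative); the Scaffolding would then select the matching core/ruby package:

# Gemfile (sketch): the ruby keyword is one of the version markers Scaffolding reads
source 'https://rubygems.org'

ruby '2.4.1'                # the version the Ruby Scaffolding would detect
gem 'rails', '~> 5.0'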

   habirails: Detected 'nokogiri' gem in Gemfile.lock, adding libxml2 & libxslt packages

Some RubyGems have native extensions or require other software to be present so this Scaffolding inspects the Gemfile.lock for some common gems that are used by the community, including execjs which requires Node.js, sqlite3 which requires SQLite shared libraries, and nokogiri as shown above. In nokogiri's case, we build this against system libraries in Habitat packages so this gem typically takes a second or two to install.

   habirails: Detected 'pg' gem in Gemfile.lock, adding postgresql package

Similar to above, this detection will add the appropriate PostgreSQL Habitat packages, but will create an optional bind for the package which lets your app discover its database in a Habitat ring. If you start the app service without a --bind option, the package will fall back to requiring database host and port configuration settings, meaning that you can point your app at an existing database that lives outside a Habitat ring.
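
In practice (a sketch; the bind and service group names are illustrative), that gives you two ways to run the app:

# Bind the app's optional `database` bind to a PostgreSQL service group in the ring
hab start fnichol/habirails --bind database:postgresql.default

# Or start it unbound and point it at an external database via host/port settings
hab start fnichol/habirails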

   habirails: Installing dependencies using Bundler version 1.14.6

The Ruby Scaffolding knows how to use Bundler to install and vendor RubyGem dependencies for use in a production environment. The exact version of Bundler which is used is also vendored into the app's package so there is one less runtime Habitat package dependency to install and only one version of Ruby is pulled in for production.

   habirails: Detected and running Rake 'assets:precompile'

In this case, the rake RubyGem was detected in the Gemfile.lock, a Rakefile was present in the project's root directory, and an assets:precompile Rake task was found. This is default behavior for Ruby on Rails applications but also commonly used in other Rack-based applications and even static site generators. If the correct project markers are found, the Scaffolding code takes over.

   habirails: No user-defined init hook found, generating init hook

Based on the app detection above, the Ruby Scaffolding can generate a suitable init hook which checks that the Rails secret key base is set and will even test your database connection, all before the Supervisor even attempts to boot the app itself.
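
For illustration, the generated hook effectively performs pre-flight checks along these lines (a sketch, not the actual generated code):

#!/bin/sh
# Sketch of the kind of checks a generated init hook performs (illustrative)
if [ -z "$SECRET_KEY_BASE" ]; then
  echo "init: SECRET_KEY_BASE must be set before the app can boot" >&2
  exit 1
fi

# Verify the database is reachable before the Supervisor boots the app itself
bin/rake db:version > /dev/null 2>&1 || {
  echo "init: unable to connect to the database" >&2
  exit 1
}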

There are a lot more features and goodies that Scaffolding packages provide for your app and it is worth reading the reference docs for the Ruby and Node.js implementations.

Next Up: More Languages

From the initial concept to early prototypes through to the first two releases supporting Ruby and Node.js, we've narrowed in on how Scaffolding should work and more importantly how it should feel: effortless. We're planning on developing and updating a few more Scaffolding implementations in the very near future which will help us find the common abstractions and behavior that can be shared.

Try It!

Will a Scaffolding help you build and package your application? There's only one way to find out: jump in and see. If you want to see Scaffolding work on a small Node.js app, check out the Packaging an App from Scratch with Scaffolding blog post. Any questions or feedback are most welcome on our community Slack. Happy packaging!

----
InSpec launches support for cloud platform assessments
// Chef Blog

InSpec Cloud Modules

We are proud to announce the release of three new open-source incubation projects to the InSpec community: inspec-aws, inspec-azure, and inspec-vmware. With these projects, InSpec extends its reach into these widely used platforms. You can now use InSpec for cloud-native security and integration testing.

When we first released InSpec, we focused on infrastructure nodes (compute) and their operating systems. The goal was to support all platforms, from legacy systems to modern runtimes: Linux, Windows, OS X, AIX, HP-UX, and more. The extensible InSpec runtime lets us add support for these platforms as they gain capabilities, and for new platforms as they emerge.

However, the goal for InSpec has always been to go beyond testing host operating systems. To that end, InSpec is being redesigned so that you can test your entire fleet-wide application infrastructure. While host-based testing on platforms such as AWS, Azure, and VMware was already possible, it was limited to that one aspect of the runtime.

With these releases InSpec reaches beyond operating systems to the additional components that drive them: APIs, network objects, storage components, and orchestrators on the AWS, Azure, and VMware cloud platforms. InSpec can now cover additional perspectives that are essential to de-risking your infrastructure and application automation.

These three projects provide a preview of a shift in how InSpec will support additional platforms. In the next few months, InSpec will introduce support for the different perspectives required for holistic testing by extending functionality for arbitrary platforms into the core framework. That new functionality is expected to ship with the InSpec 2.0 release scheduled for later this year.

InSpec Community

In the meantime, we invite you to use these modules to assess the state of operations and information security in your cloud infrastructure. As always, we look forward to your feedback via the InSpec community, as well as the Slack channels and GitHub Issues for each project.

These projects would not have been possible without the contributions from our fantastic community members. Thank you all so much for your coding and testing. We deeply appreciate your dedication to quality while we work through many exciting use cases! Many additional interesting extensions and use cases are still waiting to be added. We invite you to participate in this journey by joining the InSpec community.

AWS

InSpec-AWS provides common resources needed to test objects in Amazon Web Services (AWS). It connects to the AWS API and exposes multiple services that are available. For example, you can test that certain objects exist and are configured in a certain way.

control "aws-1" do    impact 0.7    title 'Checks the machine is running'      describe ec2('my-ec2-machine') do      it { should be_running }    end  end

Azure

InSpec-Azure provides common resources needed to test objects in Microsoft Azure. It connects to the Azure API and contains methods to test different services.

control 'azure-1' do
  impact 1.0
  title 'Checks that the machine was built from the correct image'

  describe azure_virtual_machine(name: 'example-01', resource_group: 'MyResourceGroup') do
    its('sku') { should eq '16.04.0-LTS' }
    its('publisher') { should ieq 'Canonical' }
    its('offer') { should ieq 'UbuntuServer' }
  end
end

VMware

InSpec-VMware exposes many components of VMware vSphere/ESX and allows you to test that these components are configured as you expect. A number of interfaces are available, including host, switch, and firewall configuration.

control "vmware-1" do    impact 0.7    title 'Checks that soft power off is diabled'    describe vmware_vm_advancedsetting({datacenter: 'ha-datacenter', vm: 'testvm'}) do      its('softPowerOff') { should cmp 'false' }    end  end

----
Builder: Habitat Build Service
// Chef Blog

When we launched Habitat last June we also announced that we had begun development of a build service which, in conjunction with the Habitat Supervisor, would round out our vision for the future of continuous integration and deployment of applications. Today we are pleased to announce the general availability of our open source build server – Builder – for open source projects.

What Is Builder?

If you've visited this site before today and clicked on sign-in or searched for packages, you've already seen and used an earlier iteration of Builder; the service has been live since last June. Builder performs three jobs for the Habitat community: it builds packages, hosts them, and helps you discover them.

Marketplace & Artifact Repository (The Depot)

The goal of our initial offering of Builder was to provide just enough functionality to fulfill package discovery, plus an API the Supervisor and Habitat CLI clients use to store and retrieve artifacts. Since these were the only two features available at launch, Builder colloquially became known as "The Depot" after its artifact repository component.

Hosted Build Service

In today's milestone release we finally enable the namesake feature of Builder: the ability to build packages from Plan definitions in a GitHub repository and publish them back to the public Depot. This feature is now available for all packages in white-listed origins, starting with the core origin. If you're working on an open source project that you'd like to package for Habitat, we'd love to add your origin to the white-list to enable build functionality for your packages too! Send an email to humans@habitat.sh to apply for access.

Why Build Builder?

There are plenty of great build servers to choose from which cover every combination and flavor of monetization, licensing, and hosting option, so why build another build service? The answer lies within a principle of our packaging system itself.

Habitat packages are immutable, atomic, and isolated: you cannot change their contents; the services they contain execute in their own process environment and don't share filesystem resources; and they are either fully installed and available on their host machine or they are not. It is not possible through regular use of Habitat to have a partially installed or modified Habitat package installation. The immutability of these packages also extends to what they are linked to.

Dependency Linking

Habitat packages dynamically link to explicitly defined dependencies and they will only ever link to the exact version of their dependency they were built against when the package was generated. This gives Habitat packages the feeling of static linking but significantly reduces compilation times by allowing you to download your dependencies as pre-built, GPG signed, checksummed packages and dynamically link to them.
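
For illustration, a hypothetical plan.sh declares these dependencies explicitly (names and versions here are illustrative), and each entry is resolved to an exact release when the package is built:

# Hypothetical plan.sh fragment: runtime deps are explicit and pinned at build time
pkg_name=myapp
pkg_origin=myorigin
pkg_version=1.0.0
pkg_deps=(core/glibc core/openssl)   # resolved to exact releases at build time
pkg_build_deps=(core/gcc core/make)  # build-only deps are not linked at runtime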

We chose this approach as a means to answer the problem of isolation in multi-tasking operating systems. Containers accomplish this with an equally effective strategy by ensuring that all software intended to be in a container is included at its build time.

COOL ALERT: An added benefit of isolating software at the packaging layer is that it's incredibly easy to have software which depends on different versions of core libraries, such as glibc, on the same machine at once.

The Habitat packaging system guarantees a smooth installation and unimpeded runtime experience because packages are immutable, atomic, and isolated. But these deploy-time and runtime benefits come at a build-time cost, summed up below in Dependent Builds.

Dependent Builds

Given the following three packages and the scenario:

Package-C -> Package-B -> Package-A
Package-C -> Package-A

Package-C depends on Package-B which, in turn, depends on Package-A. Package-C also depends on Package-A. The sample above is a small, simple example of a dependency graph. Packages to the left of each -> are known as a Dependent, while packages to the right are a Dependency.

If Package-A was rebuilt and published to the public Depot and then you immediately tried to build a new version of Package-C, it would probably fail due to a dependency mediation error. Package-C's dependency on Package-A is lenient enough to be satisfied by the latest build of Package-A. However, since Package-B has not yet been rebuilt against the latest version of Package-A, the version Package-B was built against would conflict with Package-C's desire to have the latest Package-A. To solve this problem we created Builder.

SUPER DUPER: It's especially super duper that Habitat requires all dependencies to be mediated, because it ensures that your application is up to date with the latest bug fixes and security fixes.

EXTRA DUPER: In every process started by Habitat, you know exactly what packages are present and which ports they open.

Automatic Dependent Rebuilds

The driving force for getting Builder done was to ease the pain of "rebuilding the world." This is the phrase the team has used when something deep in the global package graph needs to be rebuilt. core/glibc is a likely culprit to force this scenario.

To accomplish a world rebuild before Builder was ready, we would ask an engineer to babysit a set of scripts executing on a very large EC2 instance, traversing the dependency graph from the base up and outwards to rebuild every piece of dependent software. In serial :sad_emoji:.

From this need we created the concept of Automatic Dependent Rebuilds: given the scenario in Dependent Builds above, uploading Package-A would automatically queue builds for both Package-B and Package-C, with Package-B built before Package-C. If we were to use a larger dependency graph:

Package-C -> Package-B -> Package-A
Package-C -> Package-A
Package-D -> Package-A

In this scenario Package-B and Package-D would rebuild in parallel before Package-C.

KIND OF A BIG DEAL: Yeah, yeah, so if Package-A was rebuilt due to a critical bug or security vulnerability, then your application would automatically be rebuilt for you. And if you're using channels… well, you could wake up in the morning to a pretty fly acceptance environment that's been updated overnight without any intervention from you. Just connect all of an environment's Supervisors, running with an update strategy, to the channel these packages will be published to, and they will start the newly rebuilt package. Once you've woken up, had some coffee, and seen that your acceptance environment's Habitat Supervisors have successfully downloaded the new package and passed their smoke tests, health checks, and compliance tests, you can promote that package from your acceptance channel to your production channel, and see the magic happen again in that environment.

Architecture

Builder performs its various duties through a set of services assembled into a service-oriented architecture. Each component performs a particular job at a particular point in the lifecycle of a request.

[Diagram: Builder architecture]

Inceptional: Builder's services are all run by the Habitat Supervisor, and Builder is self-hosted. That is, Builder's build service produces packages of Builder's services and publishes those packages to itself. The Habitat Supervisor running the Builder services is configured with the rolling update strategy and receives updates from a Depot that is running in its Supervision Ring.

Request Dispatch

Requests originate from the Web UI through a Gateway, or directly at a Gateway. All requests are translated into our internal messaging format and communicated over ZeroMQ sockets, using a binary protocol generated with Protocol Buffers, through a Message Router.

The Message Router's job is to act as a sorting facility by peeking at the envelope of a message and sending it not only to the appropriate back-end service, but also to the correct instance of that back-end service. In Builder, some of our back-end services are sharded, which means that every message contains some information which acts as a hint for the router to randomly – but deterministically – send each message to the correct instance of a back-end service.
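
As a sketch of the idea (illustrative only, not Builder's actual implementation), deterministic routing can be as simple as hashing the hint and taking it modulo the number of shards:

# Illustrative only: route a message to a back-end shard deterministically
require 'zlib'

SHARD_COUNT = 128  # an assumed shard count

def shard_for(hint)
  # The same hint (e.g., an origin name) always maps to the same shard
  Zlib.crc32(hint) % SHARD_COUNT
end

shard_for('core')  # always returns the same shard for the "core" origin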

Job Dispatch

Let's say a GitHub hook has been triggered because the master branch of your project has just been pushed to. This GitHub hook sends a build request to the Builder API Gateway for the package associated with that GitHub hook's payload. The message goes to the Job Service, and a job request is enqueued for a Worker to rebuild your project and publish it to the Depot. A response from the Job Service confirming that the job has been successfully queued is sent back to the Builder API Gateway, which translates the response into a RESTful HTTP response.

Studio Worker

A pool of ephemeral Workers is connected to the Job Service waiting to accept work. These Workers are connected to OS-specific queues and only accept work for their OS. When a Worker receives a new job, it creates a new Habitat Studio and clones the source tree for the package's Plan into it.

Once the Plan is in the Studio's workspace, the package is built; the Worker then performs any additional post-processing steps configured for the Plan's job and publishes the built package to a particular release channel on the Builder cluster's Depot. Additional publishing steps may also be configured. For example, you may want to export your package into a container format and then upload the container to its native artifact repository.

The Job Service is notified of success or failure from the Worker, and it's ready to pick up its next job. Maybe it will pick up a rebuild of a dependent package of the software it just published to the Depot… spoooooky.

Next Up: Builder Neighborhood

This is the next big iteration of Builder, but there's still a lot more that we want to tackle. Starting after ChefConf we'll be digging into a feature to allow developers and operators to run a configuration of Builder services in their own Personal Neighborhood. This Neighborhood can optionally be connected to another to subscribe to update notifications for packages in its global package graph. This will allow closed-source projects to be built with all the same benefits of the open source offerings we're releasing today.

If you're interested in tracking our progress on Builder Neighborhoods you can do so by checking out our project tracker or our roadmap. Both Habitat and Builder are open-source and we'd love to have you join the development efforts!

----
Learn Chef Rally: New learning site for Chef practitioners
// Chef Blog

I'm excited to announce the launch of our new learning site for Chef practitioners, Learn Chef Rally.

What exactly is Learn Chef Rally? It's a rethinking of how we help our community of practitioners – both newbies and grizzled veterans – to start and continue learning about Chef, continuous automation, compliance, and DevOps. It serves as a central location for discovering hands-on, self-paced learning materials.

CHECK IT OUT

We can't wait for you to get started.

This initial release is the beginning of a long-term initiative to help people advance their Chef skills in logical and meaningful ways – from accomplishing specific tasks to mastering the important cultural concepts that fuel DevOps.

What You'll Experience on Day 1

Here's what you'll get on your first day.

Self-Paced, Hands-On Tracks and Modules

For our debut, you can pick from a menu of 11 tracks and over 50 modules. A track groups related learning activities. You can think of it as a curriculum. For example, there's an Infrastructure Automation track. A module is a specific activity where you accomplish a particular task. For example, there's a Build an Ohai Plugin module.

There's no predetermined path to follow. You can jump in and choose your own adventure, whether that's sampling a selection of modules from different tracks or settling on a single track and going from beginning to end. You do what you gotta do.

The intro video on the site provides a great overview of what to expect and how to get started.

Log In to Track Your Progress and Earn Badges

For those who like earning badges and documenting their progress, you can log in and create a personal account using auth from various platforms such as GitHub and Google. In your profile, you'll be able to see which tracks and modules you've completed, your progress in unfinished tracks and modules, the badges you've collected for finishing tracks, and other accomplishments. You'll also be notified when there's new material available.

[Image: Learn Chef Rally badges]

If you don't create an account, you still have access to all the learning materials that everyone else does but we won't keep a record of your progress.

Sharing of Learning Materials, Your Latest Accomplishments, and Your Profile

See some content or topic that you think others would find valuable? Want to show your peers that you just polished off another track? Want to highlight your profile and all of your accomplishments? You can share all of this via the usual social platforms.

More to Come

We've got a lot more in the works. Look for more learning content, more features, more badges (including special editions), and swag. We're also cooking up a few things for Chef training and Chef certifications.

And, of course, there will be more food-related wordplay and designs.

Get Started

Be sure to sign in and complete the Getting Started track to earn your first badge! (It takes only a few minutes.) Plus you'll earn the limited edition Grand Opening badge. Get it while it's hot!

----
Chef Community Engineering – Quarterly Update
// Chef Blog

We're excited to be in Austin, TX this week for ChefConf. Today we're hosting a Community Summit, so it's a great time to reflect on some of the amazing things the community and the community engineering team at Chef completed in the first three months of the year!

Supermarket

The Chef Supermarket is here to help you succeed with Chef by sharing the successes of a community of practitioners. Use the public Supermarket to collaborate with the community, or install your own private Supermarket and collaborate with your co-workers.

Cookbook Engineering

The cookbook engineering team is responsible for managing and modernizing the Chef-managed cookbooks that are published to the Supermarket. These cookbooks serve as a guide for some good practices for developing cookbooks and help automate common infrastructure components.

Open Source Projects

Chef is built on open-source software, and a thriving community of contributors is important to the continued success of the project. Lots of PRs were merged, and there was at least one release per week across a number of different tools in the ecosystem: test-kitchen, ChefSpec, Foodcritic, and the like.

Come together!

As a community we gather online and in person. The first three months of the year saw growing participation in both Slack and our mailing list.

InSpec

The community around compliance automation is really starting to take off, too. In the first three months of the year there were eleven first-time contributors to InSpec and more compliance profiles were published to the Supermarket.

Habitat

Habitat continues to see very rapid growth and adoption!

Remember, if you are unable to join us in Austin, you can live stream the ChefConf keynotes on Tuesday and Wednesday.

If you're at the conference, come find me for a hug!

----
Running Habitat Applications on Kubernetes
// Chef Blog

Containers continue to take the IT world by storm, and container orchestration platforms are a major part of how you build and ship software in a modern way. Kubernetes is a great example of the growth of this segment and has emerged as one of its leaders. Chef has been actively involved in this space, and as a continuation of that work, we are happy to announce a few patterns that help you deploy Habitat-built applications to Kubernetes.

Management Supervisor Ring

The first pattern we've defined is the concept of a management supervisor ring. Applications launching on Kubernetes need a common endpoint to bootstrap themselves upon launch. In order to run the management supervisors, we've defined a Daemon Set in Kubernetes that runs an "empty" service. The service is simply there to provide the necessary Habitat Supervisor API endpoints to query the health of the ring. The management supervisor ring also provides a Kubernetes Service that new applications can use as a common endpoint to peer to. You can find the yaml required for spinning up the Service and Daemon Set in this GitHub repo.
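
The shape of that Daemon Set looks roughly like the following sketch (illustrative only; the names, image, and API version are assumptions, and the real yaml lives in the repo linked above):

# Illustrative sketch of the management Supervisor Daemon Set
apiVersion: extensions/v1beta1       # the DaemonSet API group at the time of writing
kind: DaemonSet
metadata:
  name: hab-sup-ring
spec:
  template:
    metadata:
      labels:
        app: hab-sup-ring
    spec:
      containers:
      - name: hab-sup
        image: myorg/hab-empty-service   # hypothetical "empty" Habitat service image
        ports:
        - containerPort: 9631            # Supervisor HTTP API
        - containerPort: 9638            # Supervisor gossip/peering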

[Demo: Create Habitat Management Ring on Kubernetes]

Labels from Elections

One of the benefits of using Habitat for your containers is the built-in ability for containers to self-organize into a clustering topology. Containers within a Habitat service group automatically know which of their peers is the leader and which are followers, based on the election the Supervisor performs. This information may need to be propagated to other systems. For example, with Kubernetes you might want to label the Pods running Habitat-built containers with the results of an election in order to route traffic from Services.

The Habitat Supervisor allows you to take action once an election is complete. Using a Redis plan as an example, a Habitat election will generate a config file with the label to apply to a Pod. Before the Redis service is started, kubectl is called to label the Pod, then Redis starts with the appropriate configuration.
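
A minimal sketch of such a hook follows (illustrative; the template variables and file names are assumptions, and the real plan is in the repo linked below):

# hooks/run (sketch): label the Pod with this member's election role, then start Redis
{{#if svc.me.follower}}
kubectl label --overwrite pod "$(hostname)" role=follower
{{else}}
kubectl label --overwrite pod "$(hostname)" role=leader
{{/if}}
exec redis-server {{pkg.svc_config_path}}/redis.config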

You can find the Redis example in this GitHub repo, along with the Redis Habitat plan and a Habitat plan for kubectl. Eventually kubectl should be available as a core plan.

[Demo: Create Redis Deployment on Kubernetes]

Special Thanks

Over the past several months, we've been working with a few partners to make Habitat better. Container Solutions has provided lots of guidance around running Habitat applications on Kubernetes and has been publishing their experience working with Habitat. Specifically I'd like to thank Maarten Hoogendoorn and Pini Reznik. I'd also like to thank Chef Community member Nick Leli, who gave me some good guidance on how to best run Habitat Supervisors on Kubernetes (use a Daemon Set).

Learn More

If you want to learn more about Habitat and how it simplifies the container experience you can watch my talk from Kubecon (below). You can also find me at ChefConf where I'll be talking more about running Habitat applications on Kubernetes.

----
GoCD Cookbook and the Chef Partner Cookbook Program
// Chef Blog

I'm happy to announce that GoCD is now part of the Chef Partner Cookbook Program. ThoughtWorks has certified the GoCD cookbook.

This cookbook deploys a standard GoCD server or agent with the required settings to get it ready for your usage.

GoCD is an on-premise, open source continuous delivery tool by ThoughtWorks. GoCD provides better visibility into, and control of, your team's deployments. With GoCD's comprehensive pipeline modeling, you can model complex workflows for multiple teams with ease. And its Value Stream Map lets you track a change from commit to deploy at a glance. Commercial support and enterprise add-ons, including disaster recovery, are available. To learn more, visit www.gocd.io.

The Chef Partner Cookbook Program is a collaboration between Chef and the vendor to help validate cookbooks in our public Supermarket.

Congratulations to GoCD and ThoughtWorks!

----
Continuous Automation: Measuring Digital Transformation
// Chef Blog

The IT industry has spoken: digital transformation is here. Having a digital transformation strategy is the difference between disrupting and being disrupted. But transformation is hard, and it doesn't happen overnight. Without clear goals, it's difficult to know if you're making progress when you spend your days in the tactical trenches. The problem is that most organizations don't know which indicators to track. Setting clear metrics based on proven industry data ensures you're actually moving the needle.

Continuous Improvement

Traditional approaches to business no longer work: they don't meet the expectations of the market and are ripe for disruption by more nimble competitors. Leaders in digital transformation accept that digital (or the 'app') is the new customer interface and focus on delivering experiences to that interface. Which means more apps, doing more things, faster. Companies that embrace these shifts outperform their peers in terms of revenue growth, operating profit growth, shareholder return growth, and a host of other measures. But to get there, the pursuit of outperformance in digital transformation means following a philosophy of kaizen, or continuous improvement.

Cross-Functional Teams

[Chart: An example of the changing role of IT organizations]

Digital transformation is here. In previous posts, we've touched on how ALDO (Agile, Lean, DevOps) principles help realign teams to fix organizational tension. Many organizations are in some phase of a transformation initiative. The Chef Survey 2017 results illustrate the changing role of IT organizations. But how does an enterprise successfully navigate its way through an initiative with such large, ambitious goals?

Setting the right metrics

Effective leadership at scale means setting proper context and giving your teams the knowledge they need to make decisions and act autonomously. Disruptive leaders win by distributing expertise that enables teams to make decisions with a high degree of both decision quality and decision velocity. The hard part is distilling large complex initiatives into small, easy to convey, and easy to understand guidelines to follow.

Quantifying Outcomes with key metrics

In our short-form interactive webinar series, "Quantifying DevOps Outcomes", we explored methodologies for distributing expertise by orienting around measurable outcomes that encourage behaviors that eventually change organizational culture. The metrics to substantiate the delivery of those outcomes are gleaned from industry measures like the State of DevOps report.

Applying that to the Continuous Enterprise

Ultimately, continuous automation and the ALDO processes that support improved software delivery should be measurable. Those who lead digital transformation successfully have pioneered significant technical and cultural shifts that let them deploy applications quickly, efficiently, and with minimum risk. Quantifying the outcomes of transformation as improvements in speed, efficiency, and risk management provides a practical set of measurable guidelines for determining whether a proposed project, strategy, or implementation detail moves your organization toward success. Process changes can be objectively assessed to determine their impact as part of your continuous improvement approach.

For an in-depth look at how these metrics drive transformation initiatives that help your company outperform, check out the four-part webinar series on Digital Transformation and the competitive edge: (Part 1) (2) (3) (4). For a closer look at how measurable outcomes play a critical role in your overall strategy, download our white paper: Continuous Automation for the Continuous Enterprise.

Monday, May 15, 2017

----
Detecting the WannaCry Exploit with InSpec
// Chef Blog
https://blog.chef.io/2017/05/15/detecting-wannacry-exploit-inspec/

As you may have read in the news, an exploit called "WannaCry" has been circulating and infecting Windows systems across the globe. "WannaCry" is a particularly nasty type of exploit called "ransomware": once installed, the malware encrypts your files and holds them hostage until you pay hundreds of dollars in ransom fees.

This is a very serious exploit that has affected operations at a number of well-known businesses. Microsoft has released a number of hotfixes that patch the vulnerability. Anyone who uses Windows is strongly encouraged to run Windows Update on their systems as soon as possible.

Detecting the Exploit with InSpec

While running Windows Update regularly is good practice and should mitigate the vulnerability on your systems, it's important to know if any of your fleet is vulnerable to this exploit, and it's critically important to ensure that Windows Update properly installed the required hotfixes. InSpec, our solution for expressing compliance-as-code in a human-readable and executable language, can be used to scan a fleet remotely and report on its compliance.

We have released a wannacry-exploit InSpec profile on the Supermarket, and the source is on GitHub. This profile can be used with InSpec to scan a host and determine if the hotfixes necessary to mitigate WannaCry have been installed:

inspec exec supermarket://adamleff/wannacry-exploit --target winrm://Administrator@HOSTNAME --password AdministratorsPassword

InSpec does not require the installation of any software on the target host in order to properly scan it for compliance; as long as you have credentials to log in to the remote host, InSpec can scan for its compliance status.

Chef Automate users can use the audit cookbook and add this profile to the list of profiles executed as part of their normal Chef client runs. Each node will report its compliance findings, including the newly-added WannaCry exploit profile, back to Chef Automate, which provides a fleet-wide view of compliance status.
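
In audit cookbook terms, wiring in this profile could look something like the following node attributes (a sketch; the exact attribute format varies across audit cookbook versions, so consult its README):

# Fetch and run the wannacry-exploit profile on each chef-client run (illustrative)
default['audit']['profiles'] = [
  {
    'name': 'wannacry-exploit',
    'compliance': 'adamleff/wannacry-exploit'
  }
]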

Detect More Vulnerabilities

Scanning for known exploits is just one of the tasks InSpec can help with. InSpec offers a number of built-in resources for checking the state of your fleet, its configuration, and more. Read more about InSpec at www.inspec.io.

----
Achieving Windows Compliance with InSpec
// Chef Blog

This blog post is a follow-up to our Windows Compliance with InSpec webinar, presented live on April 11, 2017 by Joe Gardiner, Senior Solutions Architect, and Christoph Hartmann, creator of InSpec. In that webinar, we describe what continuous compliance is, and we cover assessment with InSpec and remediation with Chef. This post provides additional material to help you learn more.

Continuous Compliance

In case you missed the webinar, "continuous compliance" is a methodology for automating assessment and remediation of compliance policy with separation of duties for each part of that management cycle. Configuration management tools–like Chef, Puppet, or Ansible–can automate the remediation of compliance violations and InSpec allows you to automate the assessment. What makes this automation "continuous" is moving beyond ad-hoc manually driven periodic assessments and remediation events. To that end, the audit cookbook integrates both assessment and remediation into the same automatic event (a chef-client run) while maintaining the separation of duties critical to most information security standards.

The webinar dives into further detail. Here are some additional materials to help you get started with continuous compliance for Windows systems.

DevSec Baselines

The InSpec development team regularly contributes to DevSec, an open-source project that provides compliance baselines anyone can use. The DevSec hardening framework provides two Windows benchmarks that will help you get started.

Compliance profiles in Chef Automate

If you have a subscription to Chef Automate, it includes access to Chef-maintained and supported profiles that track the CIS industry standards:

  • CIS Microsoft Windows 7 Benchmark
  • CIS Microsoft Windows 8 Benchmark
  • CIS Microsoft Windows 10 Enterprise (Release 1511) Benchmark
  • CIS Windows Server 2012 Benchmark
  • CIS Windows Server 2012 R2 Benchmark

Custom InSpec profiles

The same technology powering the baselines above, InSpec, also allows you to create and implement your own assessments to automate compliance solutions specific to your needs. InSpec allows you to write your own profiles from scratch, import existing profiles as dependencies, and gives you constructs to pick and choose content from those dependencies so that you can quickly model overlay solutions when (for example) your company's adopted information security policies deviate from industry standards.
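
For example, a hypothetical overlay profile could pull in a baseline as a dependency and adjust only where your policy deviates (all profile and control names here are illustrative):

# controls/overlay.rb -- inherit a baseline, then override where policy differs
include_controls 'windows-baseline' do
  # our policy treats this check as lower impact than the upstream default
  control 'windows-base-101' do
    impact 0.3
  end

  skip_control 'windows-base-102'   # not applicable in our environment
end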

InSpec contains a large set of built-in resources to automate assessments as well as the ability to create new custom resources. For some additional tips on creating custom profiles for Windows, check out this blog post from December.

Windows remediation with Chef

That covers the assessment cycle in depth, but what about remediation? Chef integrates wonderfully with Windows systems. But when considering remediation options, it is important to be aware of existing tools in the Microsoft ecosystem and defer to the proper one depending on your use case. Should you use Group Policy or PowerShell DSC?

Group Policy

Group Policy is best suited to managing policy on workstations and for controlling patching policy via the WSUS client. It's not suitable for non-domain joined machines or for programmatic control of devices.

PowerShell DSC

DSC is best suited to server baseline builds, automation (DevOps), and applying compliance remediation policy. It is important to note that existing DSC resources can be mapped into the Chef DSL very easily, following the same resource structure as in the example below:

dsc_resource 'NAME' do
  resource :service
  property :name, 'NAME'
  property :startuptype, 'Disabled'
  property :path, 'D:\\Sites\\Site_name\\file_to_run.exe'
  property :ensure, 'Present'
  property :state, 'Stopped'
end

A more in-depth guide to help you choose is available in this TechNet blog post.

Additional resources

Between the webinar, InSpec tutorials, and the content above you should have enough to get started down the path of Continuous Compliance for Windows. Once you're ready for more, these links may provide some additional useful places to dig in.
