Sunday, December 17, 2017

Upgrade to XenServer 7.3



----
Upgrade to XenServer 7.3
// Xen Orchestra

XenServer 7.3 is now available. Let's see how to upgrade (or update!) it.

Some free features are removed in 7.3. Please read this before upgrading.

Xen Orchestra is 100% compatible with XenServer 7.3.

From XenServer 7.2

If you are running XenServer 7.2, the process will be only an update, not an upgrade.

You need to download the update "pack". You can do it directly from your pool master:

$ wget http://downloadns.citrix.com.edgesuite.net/13372/XenServer-7.3.0-update.iso  

Then, still on your pool master, you can deploy the update:

$ xe-install-supplemental-pack XenServer-7.3.0-update.iso  
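
Once the pack is installed (a host reboot is generally required for the update to take effect), you can quickly check that the host reports the new version from the pool master:

$ xe host-list params=name-label,software-version

The product_version field should now read 7.3.0.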

From older XenServer

This is the standard upgrade procedure. You can upgrade directly to 7.3 from XenServer 6.2, 6.5, 7.0, or 7.1.

If you are using XenServer 7.2, please read the previous section.

ISO download

You can fetch the ISO here: http://downloadns.citrix.com.edgesuite.net/13371/XenServer-7.3.0-install-cd.iso

The ISO can be burned to a CD, but see the next section if you prefer to install from USB.

Install from USB

From any Unix/Linux:

dd if=XenServer-7.3.0-install-cd.iso of=/dev/sdX bs=8M status=progress oflag=direct  

Replace sdX with the name of your USB key.

On Windows, use a dedicated program that can write ISO to USB drives.

Partitioning

It's exactly the same as in all previous XenServer 7.x versions (see our previous blog post: upgrade to XenServer 7.1):

  • / (root) 18GB
  • /boot/efi 512M
  • /var/log 4GB
  • Swap 1GB

Are you upgrading from an older version than XenServer 7.0? Or do you have the old partition scheme? Please follow instructions in our previous blog post to switch to the new one.

Rolling pool upgrade

If you have a pool with multiple hosts, there are some basic rules to follow:

Always upgrade the pool master first:

  1. Migrate VMs from your pool master to slaves
  2. Upgrade the pool master
  3. Migrate VMs from one slave to the pool master
  4. Upgrade this slave
  5. Etc.

You can always live migrate VMs from an older XenServer to a newer. The opposite IS NOT POSSIBLE.

Also, always remember to eject any ISOs from your VMs' virtual CD drives and to disable HA for the duration of the operation.
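
For reference, here is the kind of xe CLI involved in those steps. This is a rough sketch; replace the VM and host names/UUIDs with your own:

# on the pool master: disable HA and eject any mounted ISOs
$ xe pool-ha-disable
$ xe vm-cd-eject vm=<vm-name-or-uuid>

# live migrate a VM to another host before upgrading the one it runs on
$ xe vm-migrate vm=<vm-name-or-uuid> host=<destination-host> live=true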


----


XenServer 7.3



----
XenServer 7.3
// Xen Orchestra

It's out! The latest CR (Current Release) version of XenServer is available. So what's new?

What's new?

Removed features

If you don't have a licensed version of XenServer, upgrading/updating to this version will remove these features:

  • Xen storage motion
  • Dynamic Memory Control
  • Basic GPU Passthrough
  • Pool size limited to 3 hosts max
  • and more, details here

If you need a license, you can contact us (via the live chat on our website): as a Citrix partner and reseller, we can help you with licensing.

Added features

If you have an Enterprise license for XenServer, you'll have:

  • Efficient multicast support via IGMP snooping
  • Support for NVIDIA Pascal graphics cards
  • Nested virtualization for Bromium Secure Platform
  • Changed Block Tracking

We reviewed the CBT feature: in short, its only advantage over the current delta backup is that you can remove the reference snapshot (and leave only the metadata). It doesn't solve the main problem for block-based storage that isn't thin provisioned, because you still need to create the snapshot in the first place.

However, there are interesting features under the hood of CBT, notably the NBD protocol, which will probably be useful for Xen Orchestra in the future to fetch VM content more flexibly.
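
For the curious, CBT is driven by a few new xe commands in 7.3. A rough sketch (UUIDs are placeholders):

# enable changed block tracking on a VDI
$ xe vdi-enable-cbt uuid=<vdi-uuid>

# turn a reference snapshot into metadata only (its data is destroyed)
$ xe vdi-data-destroy uuid=<snapshot-vdi-uuid>

# list the blocks that changed between the snapshot and the current VDI
$ xe vdi-list-changed-blocks vdi-from-uuid=<snapshot-vdi-uuid> vdi-to-uuid=<vdi-uuid>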

Right now, it doesn't change anything: Xen Orchestra is fully compatible with XenServer 7.3!

Should I upgrade?

It depends:

  • If you are on 7.1 LTS, and happy with it, stay.
  • If you are on 7.2 without a license, you'll lose some features.
    • If those features are too important for you, you can stay on 7.2
    • Or buy a license

----


phpIPAM version 1.3.1 released



----
phpIPAM version 1.3.1 released
// phpIPAM IP address management

New version of phpipam (1.3.1) released.
----


pfSense 2.4.2-RELEASE-p1 and 2.3.5-RELEASE-p1 now available



----
pfSense 2.4.2-RELEASE-p1 and 2.3.5-RELEASE-p1 now available
// Netgate Blog

We are excited to announce the release of pfSense® software versions 2.4.2-p1 and 2.3.5-p1, now available for upgrades!


----


Habitat Updates



----
Habitat Updates
// Food Fight

Nell Shamrell-Harrington, Tasha Drew, and Jamie Winsor discuss the latest updates to Habitat!

Panel

Show Notes

Picks

Nell

Tasha

Jamie

Download


The Food Fight Show is brought to you by Nathen Harvey and Nell Shamrell with help from other hosts and the awesome community of Chefs.

The show is sponsored, in part, by Chef.

Feedback, suggestions, and questions: info@foodfightshow.com or http://github.com/foodfight/showz.


----


ChefConf 2018 CFP: Application Automation Track



----
ChefConf 2018 CFP: Application Automation Track
// Chef Blog

ChefConf is the largest Chef community reunion and educational event for teams on the journey to becoming fast, efficient, and innovative software-driven organizations. In other words, you and your team!

ChefConf 2018 will take place May 23-26 in Chicago, Illinois and we want you to present! The ChefConf call for presenters (CFP) is now open.

One of the tracks you might consider proposing a session for is the Application Automation track.

Application Automation

The cries for digital and cultural transformation can be heard from every corner of the business world. Every day, technologists are becoming more concerned with delivering customer and business value. How are your teams empowered to deliver this value to production? Teams are adopting the tooling and practices necessary to embrace cloud-native technologies and move into the brave new world of containers, orchestration, cloud infrastructure, and serverless solutions. In the meantime, some legacy applications are being lifted out of the data center and shifted to the cloud. Habitat is a simple, flexible way to build, deploy, and manage modern distributed applications.

Application automation is the term we use to describe the processes used to build, deploy, and manage these applications.

Share your story of using Habitat and related technologies to manage the lifecycle of your team's applications. Below are some ideas and questions to consider.

Understanding distributed systems

Topologies, service discovery, consistency, availability, partitioning, and more! Working with distributed systems means learning about new concepts and terms. The Habitat ecosystem addresses many of these concerns, making it easier to implement and leverage them within applications.

  • What does everyone getting started with distributed systems need to know?
  • How does Habitat address and enable each of these concepts?
  • Can you demonstrate how your application uses one or more of these?

Containers, containers, containers!

Containers provide many benefits to modern application teams. Being able to run the same artifact in many different environments simplifies delivery pipelines, increases confidence, and allows teams to deliver value faster. But containers alone may not be enough. Scaling out containers and running production workloads often requires additional technologies like a container scheduler or platform as a service. Habitat provides the capability to export artifacts into a number of different formats including Docker images, Cloud Foundry images, and more. Using the Habitat builder service, you can automatically publish these containers to Docker Hub or Amazon's Container Registry.
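
As a concrete starting point, the export step itself is a one-liner from the Habitat CLI (the origin and package names below are placeholders; similar exporters exist for the other formats mentioned above):

hab pkg export docker my-origin/my-web-app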

  • How has Habitat's application-first approach changed your container build process? The size and shape of your container?
  • How are you understanding the provenance and lineage of your containers? In other words, "what's in the container?"
  • Which export formats are you utilizing for Habitat? Why and how?
  • What container orchestrators, schedulers, or platforms are you utilizing? Why and how?
  • Have you considered building a custom export format? What formats would you add? How would you approach building that?

A better habitat for …

Many application frameworks have mature notions of packaging applications. Java applications, for example, are often packaged as .jar or .war files that are ready to be run inside a Java runtime. In other frameworks, such as Ruby on Rails, the idea of building an artifact is foreign to most of the community. Habitat allows you to create packages and simplify the deployment and management of any application framework. Not everything we build or run is an application framework, either. What about persistent data stores or other services?
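
If you have never seen one, a Habitat plan is just a small shell file. Here is a minimal, purely illustrative plan.sh for a hypothetical Rails app (names and dependencies are made up for the example):

pkg_name=my-rails-app
pkg_origin=my-origin
pkg_version="0.1.0"
pkg_deps=(core/ruby core/bundler)
pkg_build_deps=(core/make core/gcc)

do_build() {
  # resolve and vendor the app's gems
  bundle install --path "vendor/bundle"
}

do_install() {
  # copy the app into the package's install prefix
  cp -R . "${pkg_prefix}/app"
}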

  • Share your story of packaging specific application frameworks with Habitat (Java, Rails, Node, PHP, Python, etc.).
  • Share your story of packaging and running distributed databases with Habitat (PostgreSQL, MySQL, MongoDB, etc).
  • How has a common packaging format impacted your delivery platforms across various application frameworks?

Putting the "Dev" in DevOps

The word "DevOps" has always started with "Dev" yet many participants in the community have a deep background in operations.  Habitat aims to bring better automation capabilities to developers and make the DevOps tent larger so that everyone has a place. This also means more and closer collaboration between teams!

  • How is Habitat impacting your development process?
  • How has Habitat improved collaboration between dev and ops?
  • As a developer, what are the things you love, or hate, about Habitat?

Getting Started

Application automation with Habitat is a relatively new practice and the tools available are quickly evolving. How are you getting started with Habitat? Have you started with core packages or are you building your own? You do not need to be an expert to help others get started. Your experiences getting started with Habitat are worth sharing, even if as cautionary tales. ChefConf is a great place to help fellow community members get started on the right foot.

  • What do you wish you knew when you first got started?
  • How are you helping people across your organization get started with application automation?
  • Which use cases are well-suited for getting started with application automation?

Other Tracks

The ChefConf CFP is open for the following tracks:

  • Infrastructure Automation
  • Compliance Automation
  • Application Automation
  • People, Process, and Team
  • Delivering Delight
  • Chaos Engineering
  • Don't Label Me!

Share Your Story

Your story and experiences are worth sharing with the community. Help others learn and further your own knowledge through sharing. The ChefConf CFP is open now. Use some of the questions posed here to help form a talk proposal for the application automation track.

Submit your talk proposal now! The deadline is Wednesday, January 10, 2018 at 11:59 PM Pacific time.

The post ChefConf 2018 CFP: Application Automation Track appeared first on Chef Blog.


----


Habitat container publishing to Amazon Elastic Container Registry (ECR)



----
Habitat container publishing to Amazon Elastic Container Registry (ECR)
// Chef Blog

Habitat Builder now supports publishing containers to Amazon Elastic Container Registry (ECR)! Habitat Builder enables users to programmatically build, export, and publish their applications and services to container registries.

Users of Habitat Builder can deliver both legacy and greenfield applications in an atomic, immutable, isolated artifact that is automatically rebuilt as upstream dependencies, libraries, and application code are updated. This Habitat artifact (*.hart) can then be automatically exported to a variety of formats, depending on the environment and job you are trying to do, including a Docker container.

Once you've set up your package to automatically export as a Docker container, you can integrate your Habitat Builder origin with a container registry, and automatically publish your application and services as a container to the registry or registries that best complement your workflow. Today, we are excited to announce the addition of Amazon ECR as a publishing location.

If you're new to Habitat Builder, you can fly through a quick 10 minute demo to set up a sample Node.js application, auto-building upon updates, exporting to a Docker container, and publishing those updates to a container registry here: https://www.habitat.sh/demo/

Once you've done that demo, adding Amazon ECR as a publishing destination is super simple. Go to your origin, select the Integrations tab, and click on the "Amazon Container Registry" option.

Amazon Container Registry

You will be given a form to add your Amazon ECR information to link your accounts.

Add Registry Account

And then you'll go through a simple setup procedure where you can select the GitHub repo where your Habitat plan.sh lives, decide whether you want your Habitat packages to be listed as public or hidden as private, and choose to publish your containers to Amazon Container Registry and/or Docker Hub (let us know in the comments if there are more container registries you'd like us to add!). You can also tag your builds with specific tags if desired.

Select a GitHub repo

Once you've clicked the "Save Connection" button, your Habitat Builder-maintained application can start exporting and publishing to Amazon ECR! Click "Build latest version" on your package's page to kick off a build immediately, or wait for an automatic rebuild to occur by committing to the master branch of your application's GitHub repo.
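
If you ever need to reproduce that flow by hand (for example, to debug a build), the equivalent CLI steps look roughly like this, assuming an existing ECR repository named my-node-app and AWS CLI credentials already configured:

# export the Habitat package as a local Docker image
hab pkg export docker my-origin/my-node-app

# log Docker in to ECR (AWS CLI v1 syntax)
$(aws ecr get-login --no-include-email --region us-west-2)

# tag and push the image to the registry
docker tag my-origin/my-node-app:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-node-app:latest
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-node-app:latest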

Build latest version

Learn more about Habitat

The post Habitat container publishing to Amazon Elastic Container Registry (ECR) appeared first on Chef Blog.


----


Zero to Continuous Automation on Bare Metal Using HPE and Chef



----
Zero to Continuous Automation on Bare Metal Using HPE and Chef
// Chef Blog

In our last blog post, we gave you an introduction and high-level overview of how Chef Automate and HPE OneView work together to enable cloud-like speed on bare metal hardware. In this post, we'll explore a bit deeper with technical details and code examples.

The Old Way

There are three main requirements for running most modern applications. You need a network to serve your application's traffic, compute resources to run your application, and storage to retain important data and content. Traditionally, separate teams handle each of these areas. If you want to bring up a brand new environment, the process might look something like this:

  1. Open a support ticket requesting a new environment.
  2. Your request is approved at the next weekly change management meeting.
  3. The ticket is forwarded to each of the network, storage, and systems teams.
  4. The storage team has to create some new disk LUNs and make them available.
  5. The network team provides ports and IP addresses for your machines to use.
  6. The systems team installs the operating system and makes sure the network and storage are configured correctly.
  7. If any of steps 4, 5, or 6 is incorrect or incomplete, you must go back to that step until the problem is fixed.
  8. Two weeks later, the systems team hands over the environment and you are able to log on and begin working.

Two weeks! Some organizations take up to two months or more simply to deliver some virtual machines. These delays lead to frustrated users, who often create their own "shadow IT" environments using their company credit card and public cloud provider environments. This is a nightmare for organizations who care about security and cost controls. But who can really blame the poor developers? They simply want to get their work done on time. Moving everything into the cloud is usually not a viable option for legacy applications and workloads.

The New Way

HPE and Chef have partnered together to bring the world of Infrastructure as Code right down to the layer of bare metal. You can easily provision network, storage, and compute resources right in your own data center. Sensitive data stays secure and controlled within your own physical network, but you still gain the speed and efficiency that your users demand.

Let's say you need to stand up a new ethernet network for your application. In a traditional data center this would usually require logging onto a network device and running some commands to create the network and add all the correct routing data. With the OneView Chef cookbook it's a simple Chef resource like the example below:

oneview_ethernet_network 'Eth1' do
  client my_client
  data(
    vlanId: 1001,
    purpose: 'General',
    smartLink: false,
    privateNetwork: false
  )
end

You're not just limited to network configurations. You can also add a storage pool like the example below:

oneview_storage_pool 'CPG_FC-AO' do
  client my_client
  storage_system 'ThreePAR7200-8147'
end

Network and storage wouldn't be complete without their friend compute. We've got you covered there too. This example brings up a new physical server in enclosure 1, bay 2:

oneview_server_profile 'ServerProfile2' do
  client my_client
  data(
    description: 'Override Description',
    boot: {
      order: [],
      manageBoot: true
    }
  )
  server_profile_template 'ServerProfileTemplate1'
  server_hardware 'Encl1, bay 2'
end

Chef recipes are easy to write and easy to read. Think of them as "executable documentation". You simply string together the building blocks, or resources, that are required to stand up each part of your application stack. Once you have your Chef recipe ready you can run it with the proper login credentials, then sit back and watch your infrastructure build itself. After your infrastructure is built you can use Chef Automate to view the build status and compliance status of each part of your infrastructure.
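
If you want to try a recipe like the ones above from your workstation, chef-client in local mode is enough to converge it (the cookbook and recipe names here are placeholders, and the OneView client credentials still have to be supplied, e.g. via my_client):

chef-client --local-mode --override-runlist 'recipe[oneview_infra::networks]'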

Image Credit: http://fredrikdesigns.com/projects/growly-bear-metal/

Bare Metal Deployment – a Fictional Case Study

Our story begins at Spacely Space Sprockets, the world's leading manufacturer of Space Sprockets. We'll be following along with the lead systems engineer George as he and his team begin their journey on the path of continuous automation.

The systems engineering team configures most of its machines using scripts and manual processes. As a result, it can take several hours to deploy a new machine. Developers have complained about long lead times to get their dev environments set up properly, and sometimes the machines are delivered with the wrong settings or missing packages.

Our team has started a pilot project to speed up delivery times and gain better control over bare metal server deployments. Chef and the HPE OneView cookbook will be used to build and deploy their infrastructure and applications. The cookbook exposes all of the configuration settings of a Synergy rack via simple, declarative Chef resources. There are plenty of examples you can copy to configure your own devices.

Find the code samples here: https://github.com/HewlettPackard/oneview-chef/tree/master/examples

George used the OneView cookbook to whip up a Chef recipe to create fibre channel networks, an ethernet network and an enclosure group. You can see a copy of the entire recipe here. For the sake of brevity, we'll show the part of the recipe that stands up our blade server in the bottom rack of our enclosure, in bay 4:

oneview_server_profile 'Chef-Node-1' do
  client my_client
  server_hardware 'BOT-CN75150107, bay 4'
  server_hardware_type 'SY 480 Gen9 CNA Only'
  enclosure_group 'EnclosureGroup1'
  server_profile_template 'RedHat 7.3'
end

oneview_server_hardware 'BOT-CN75150107, bay 4' do
  client my_client
  power_state 'on'
  # action [ :set_power_state, :update_ilo_firmware ]
  action :set_power_state
end

Note how you can even update firmware using Chef!

See how easy that was? You simply declare where you want the machine deployed and name a server profile template to build the machine with. The next resource powers on the machine for us after it has been built.

George runs this Chef recipe from a special infrastructure node that is used expressly for configuring the Synergy rack. It can be inside the rack itself, or external to it. As long as you have valid API credentials you can configure your HPE Synergy infrastructure. OneView Chef recipes are a little different from normal Chef recipes, because they are meant for configuring infrastructure and not operating system configs. Current Chef users will feel right at home because they already speak the language. New users will also find it easy to build and maintain their infrastructure using the Chef DSL.

Synergy_Oneview_Chef

Let's take a look at what happened when George ran his Chef recipe. Here's a screenshot of the Composer UI that shows the new server being built. Note that it was put exactly where the code said it should be located: on the bottom rack, in bay number 4. With Chef you have fine-grained control over every part of the hardware and software.

Now you might be wondering how to configure the machine once it's up and running. Since it's a bare-bones Red Hat 7.3 OS, George still needs to install some packages and middleware before we can get the application running. In order to bootstrap the machine with Chef and get all of that set up correctly, he needs to add a simple bash script to the OS build plan. All of the Synergy and Image Streamer API endpoints can be configured using Chef. Our installation script goes toward the end of the build process as shown below:

Step number 7 will install Chef, bootstrap our node to the Chef Automate server, and then execute the run list, or list of instructions that Chef uses to build the machine. In this case we are standing up an instance of the company's ecommerce site. This is a Linux/Java/Tomcat/Apache stack with a MySQL database on the back end. Remember that building these servers used to take George and his team anywhere from a few hours to a couple of days. Now they are able to do it with Chef in less than twenty minutes, on bare metal hardware. What an improvement! The dev team is thrilled that they can have new environments set up so quickly. New hires are able to be productive on their very first day at work. Senior devs are no longer hoarding machines because they are afraid of long rebuild times. The sysadmins are happy because they are no longer wasting time fixing broken dev environments. Because a brand new environment is never more than twenty minutes away, it's often quicker to simply wipe and re-provision.
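
For readers wondering what such a bootstrap step might contain, here is a stripped-down sketch (ours, not Spacely's; URLs and names are illustrative, and a real first run also needs a client.rb and validator key dropped into /etc/chef):

#!/bin/bash
# install the Chef client via the omnitruck installer
curl -L https://omnitruck.chef.io/install.sh | bash

# register against the Chef server behind Chef Automate and converge the run list
chef-client -r 'role[ecommerce]'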

Now that we're using infrastructure as code to build our environments, let's take a look at the Chef Automate dashboard to get an overall health check:

converge status

See how there are two nodes? One is the helper node that builds and manages the Synergy rack, and the other node is the ecommerce server that we just built. All our Chef recipes are running successfully which means everything was built exactly to code. From this point forward the Chef client will run every 30 minutes on each node, ensuring that any configuration drift is caught and remediated quickly. No more worrying about sysadmins or developers changing settings by hand. Instead our syseng team is able to build everything from source.

But what about security and compliance? For that we head over to the Compliance tab of Chef Automate. Chef Automate is not just for building machines. You can also use it to scan your entire fleet for compliance and audit status. The compliance dashboard is powered by Chef's compliance-as-code tool, InSpec. InSpec is a language and framework for describing audit, security, compliance and QA requirements. You can use it to ensure that all your infrastructure is built correctly and stays compliant with internal and external security regulations.

First let's take a peek at the Synergy node which is passing all its tests. Below is the InSpec code that was used to make sure our application was deployed onto the correct hardware. InSpec is even easier to learn than Chef, so you can quickly develop rule sets for all your QA and audit needs:

We're also able to collect compliance data on our OS and applications. In the ecommerce-prod-1 report we see the results of a Linux security baseline scan. Sysadmins can take action on each item and remediate them according to the severity level.

Summary

This was a simple example of how you can leverage Chef to move faster and more efficiently in your own data center. If you have more complex use cases, Chef can scale to support hundreds of thousands of machines across multiple data centers. Join the ranks of the Chef community and learn more at: http://learn.chef.io

More about Chef and HPE

The post Zero to Continuous Automation on Bare Metal Using HPE and Chef appeared first on Chef Blog.


----


Happy Birthday OWCA! Plus the Latest Chef Innovations for AWS Customers



----
Happy Birthday OWCA! Plus the Latest Chef Innovations for AWS Customers
// Chef Blog

Last year at AWS re:Invent, Chef was excited to join Amazon VP & CTO Werner Vogels to announce the availability of OpsWorks for Chef Automate (OWCA). As the first provider of configuration management through OpsWorks, Chef has a long-standing commitment to the success of AWS customers. Today, I want to share some of our latest features and innovations for AWS as we kick off re:Invent in Las Vegas.

This batch of announcements builds on last year's release to provide even better AWS application development and management support across the entire app lifecycle. Along with the support AWS announced earlier this month for the new OpsWorks update to Chef Automate 1.6, we are announcing native EC2 Container Registry support for Habitat Builder, and that Chef is available to US government customers through the AWS GovCloud private cloud.

Our long-time partnership with Amazon lets us deliver this latest round of features to AWS customers. It's been a year since OpsWorks for Chef Automate was announced as the first configuration management service offered by a partner to AWS customers. This early partnership gave us a great head start in maturing our service and offering new and enhanced capabilities to AWS customers.

Detect and Correct with OWCA

And we've used the last 12 months to deliver more innovation to AWS shops. The upgrade of OpsWorks for Chef Automate to Chef Automate 1.6 brings our detect-correct-automate capabilities to OWCA customers. This lets organizations run compliance and audits on their cloud and on-prem systems using both homegrown and publicly available compliance packages. And these scans are both managed and visualized in OWCA. This means that you can point OWCA at your servers and nodes, both in AWS as well as in your own data centers, to manage and quickly understand the compliance state of your systems.

Habitat Builder EC2 container registry

The improvements to Chef Automate I've described focus on the "Ops" side of the DevOps equation. The next announcement switches gears to how we support the success of developers in a cloud-native environment. Last month we delivered our Habitat Builder SaaS offering, which is the fastest path from code to Docker container. At re:Invent, we are announcing support for delivering Habitat packages directly into the EC2 container registry. Habitat Builder gives developers an easy way to take your code (currently Node.js, Ruby, Java) and automatically build fully packaged containers (including supervisor code) into a container registry.

With the addition of the EC2 Container Registry, developers can now build containers running on AWS simply by checking new code in to GitHub. Habitat's build service then automatically rebuilds the package and puts the artifacts into the registry, and Habitat provides detailed information about the build history and state of the artifact for the operations team, so they can deploy across any technology and infrastructure. Habitat is free and you can try it out at http://habitat.sh. The Habitat Builder EC2 container registry export support will be available at re:Invent.

Chef Automate available in the AWS GovCloud marketplace

And finally, we are excited to announce our participation in the Amazon public sector cloud programs. Chef's offerings will be available in the AWS GovCloud marketplace, so government entities and intelligence agencies using Amazon's cloud offerings can use Chef Automate inside their cloud instances. We are also partnering with AWS to deliver special training and other programs for our public sector customers.

Meet us at AWS re:Invent

If you're attending AWS re:Invent in Las Vegas, stop by and say hi to Chef! We are at the re:Invent exhibit hall in booth #224. You can get more details on these and other new goodies, and grab some fun stickers for your collection.

Learn more

The post Happy Birthday OWCA! Plus the Latest Chef Innovations for AWS Customers appeared first on Chef Blog.


----


A Preview of new Chef support for AWS customers



----
A Preview of new Chef support for AWS customers
// Chef Blog

Last year at re:Invent 2016, Amazon announced the availability of OpsWorks for Chef Automate (OWCA). Chef, available as a service to Amazon customers, opened up continuous automation to a whole new generation of customers creating cloud-native applications. This past year has been busy for Chef, as we have extended our support for AWS customers to bring the latest and greatest innovations for organizations who want to follow a detect-correct-automate pattern to best manage their application estate.

Today, we are announcing the results of this work in preparation for the AWS re:Invent conference later in November. These announcements include:

  • Bringing the compliance features of Chef Automate 1.6 to OWCA
  • Extending the new Habitat Builder support to natively export containerized applications to the EC2 container registry
  • Chef's participation in the Amazon Public Sector programs for US Government private cloud customers of AWS

These innovations deliver new value and capabilities to AWS customers across the entire application delivery, deployment, management, and compliance lifecycle. And they continue to build on our commitment to delivering a variety of choices for implementing Chef that map to a customer's needs and preferences. Whether you use Chef through OWCA, acquire it through the AWS Marketplace, or extend your existing Chef investment to AWS in a hybrid environment, you can have the benefits of continuous automation and compliance across all of your systems.

Here are some additional articles and resources you may find valuable:

Meet us at AWS re:Invent 2017

re:Invent 2017 is shaping up to be an exciting event for AWS customers and Chef is excited to be participating throughout the conference. Stop by our booth, one of our presentations, or say hi at the pub crawl. Check out what we have planned for the event, and I'm looking forward to seeing you in Las Vegas!

The post A Preview of new Chef support for AWS customers appeared first on Chef Blog.


----


Tuesday, December 12, 2017

pfSense 2.4.2-RELEASE now available



----
pfSense 2.4.2-RELEASE now available
// Netgate Blog

We are excited to announce the release of pfSense® software version 2.4.2, now available for new installations and upgrades!


----


Application Detection on pfSense® Software



----
Application Detection on pfSense® Software
// Netgate Blog

Thanks to the Snort package and OpenAppID, pfSense is now application-aware.


----


Achieve DevOps Bliss in a Hybrid Cloud with Chef and CloudBolt



----
Achieve DevOps Bliss in a Hybrid Cloud with Chef and CloudBolt
// Chef Blog

What is CloudBolt?

Since you're reading this blog, we'll assume that you already know what Chef is, but you may not be as familiar with CloudBolt. CloudBolt is a hybrid cloud management platform that provides an intuitive interface for all IT requests. Users who need to request and manage resources can use the CloudBolt UI to get the resources they want, when they want them, and they are deployed in an automated fashion in accordance with the organization's policies and best practices.

The integration between CloudBolt and Chef (which was released in 2013) makes sense because CloudBolt acts as the self-service catalog with business logic and policies, and Chef acts as the configuration manager automating operational logic and policies. CloudBolt is great at providing a simple UI to drive complex, orchestrated builds, and Chef is great at managing the configuration.

What does DevOps Bliss in a Hybrid Cloud Look Like?

A common yet lofty goal for enterprise IT organizations is to provide a self-service, hybrid cloud interface that enables end users to create and manage VMs. At the same time, IT staff must ensure that those same systems are built consistently, according to the organization's standards, and that they follow DevOps best practices such as modeling infrastructure as code.

Chef and CloudBolt, when used in tandem, can achieve this state of DevOps bliss. Together, they empower users with self-service, freeing IT staff from the tedious process of manually building out systems, and instead enabling them to focus on more strategic, higher value work.

DevOps is a good thing when a few people in the organization can take advantage of it, but it can only reach its full potential when it is open to all users of IT resources.

Prerequisites

The following assumes you have both Chef and CloudBolt installed in your environment, and that you have administrative access to both of them.

Integrating Chef and CloudBolt

CloudBolt comes out of the box with built-in integration with Chef. From the CloudBolt user interface:

  • Browse to Admin > Configuration Managers, click the "Add a configuration manager" button and choose Chef.
  • Complete the form, providing information on your Chef server.
  • A few additional steps are needed, including installing the proper version of the Chef development kit on the CloudBolt server. The CloudBolt UI contains a link to complete instructions, or you can follow this one: http://docs.cloudbolt.io/configuration-managers/chef/index.html.

Creating a Chef config mgr in CloudBolt

Once basic communication is established from CloudBolt to Chef, you can import Chef roles and cookbooks into CloudBolt from the corresponding tabs on the page for that Chef configuration manager. After that is done you can choose which environments in CloudBolt should be Chef enabled. New VMs built in these environments will automatically have a Chef agent bootstrapped onto the servers, including any recipes or roles that you specify should be applied to all new VMs.

Managing Chef from CloudBolt, importing cookbooks

This setup is consistent whether you are building in your private data center with a virtualization technology like VMware or Nutanix AHV, or in one of the eight public clouds that CloudBolt supports.

Now you can test new server builds and use the job details page to see where the Chef agent is installed and configured, and the output of the installation of the specified roles and recipes.

A server build in CloudBolt that installed the Chef agent

Benefits

Ease of use is a key benefit of CloudBolt. As a result, a broad range of users can perform on demand server and application stack builds. In addition, this larger group of people will now be able to take advantage of the power of Chef. The utility of your Chef investment is effectively multiplied by the number of users that use CloudBolt to build and manage environments.

  • IT shops that use both CloudBolt and Chef build environments with confidence, and know that the resulting systems will be configured consistently across dev, QA, and prod environments, across on-prem and cloud deployments.
  • One joint user of Chef and CloudBolt created a blueprint for deploying Hadoop clusters. With just a few clicks they are now able to deploy scalable 50-node Hadoop clusters in the public cloud, using CloudBolt as the user interface and orchestrator, and Chef to do the heavy lifting of the configuration of each of the nodes and installation of the appropriate Hadoop software.
  • Since CloudBolt supports multiple Chef servers, Chef shops can more easily use several Chef servers, possibly of different versions, all from one user interface.

In short: DevOps bliss is achieved in a hybrid cloud with Chef and CloudBolt.

Conclusion

Watch for future CloudBolt contributed blog posts (we have one on the way about the aforementioned Hadoop blueprint), or let us know what related topics you would like to see posts on.

Visit cloudbolt.io or schedule a demo to learn more.

The post Achieve DevOps Bliss in a Hybrid Cloud with Chef and CloudBolt appeared first on Chef Blog.


----


Compliance with InSpec: Any Node. Any Time. Anywhere.



----
Compliance with InSpec: Any Node. Any Time. Anywhere.
// Chef Blog

InSpec is an agentless compliance scanner, which means that you can use InSpec to perform compliance scans of configuration settings without installing anything, changing configuration settings, or otherwise affecting the state of the machine you are scanning. Compliance scanning is important for many reasons, among them assessing formal regulatory compliance, diagnosing emerging or recurring security concerns, and defining compliance standards that suit your unique systems and needs. InSpec gives you near-immediate insight into your system. You can combine the power of InSpec with the flexibility of working from your Android phone, which lets you apply your compliance tools to any node, at any time, from anywhere.

Assess the compliance of any machine

What's often missed in the discussion of InSpec as a compliance tool is that you can use it to assess the compliance of any machine, not just nodes that are under management by Chef. While you can execute InSpec scans as part of a chef-client run, it's equally effective at scanning systems not under active configuration management. InSpec may also be used to scan Docker containers, virtual machines, and on-site hardware, as well as systems managed by Ansible or Puppet.
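
For example, pointing a Supermarket profile at a running Docker container is a one-liner (the container ID is a placeholder), and a local scan is simply inspec exec with a profile path:

inspec supermarket exec dev-sec/linux-baseline -t docker://3cc8837bb6a8
inspec exec path/to/profile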

From a practical perspective, the ability to scan any system reflects the reality of large operations with many different groups that may have many different methods of configuration management and deployment, or none at all. While, of course, we think you should use Chef products for all of your configuration and deployment needs, we are also sufficiently realistic to realize that you have compliance needs regardless of whatever decisions you have already made.

Scan an Ansible tower using InSpec on Android

Using InSpec for compliance scanning is as simple as downloading InSpec, selecting a compliance profile from the Chef Supermarket, and running it against a machine. To scan a node, all you need is InSpec, the address of the node, and the key for the node. In the following video, I scanned an Ansible Tower instance running on AWS CloudFormation, using InSpec installed in the Termux app on my new-in-2015 Samsung Galaxy Note 5, and pulled the ssh-baseline profile available from the Chef Supermarket.

If you're interested in trying this out, you'll need an Android phone, the free Termux App, access to a node somewhere, and your ssh key.

Setting up Termux on Android is just like setting up any other computer: it takes some tweaking. For running InSpec, you'll need to set Termux up for Ruby development, enable it to compile native extensions, install InSpec, and make Git available. Your installation may vary, but for my phone, I needed:

1) Set up for Ruby development:

apt-get install ruby
apt-get install ruby-dev

2) Set up to compile:

apt-get install make
apt-get install libffi
apt-get install libffi-dev
apt-get install chomp

3) Set up and install InSpec:

gem install bundler
bundle install
gem install inspec

4) Make Git available:

apt-get install openssl
gem install git

To run the scan, the command syntax is:

inspec supermarket exec dev-sec/ssh-baseline -t ssh://ipaddress -i mykey.pem

For example:

inspec supermarket exec dev-sec/ssh-baseline -t ssh://ec2-user:ec2-user@ec2-34-211-195-159.us-west-2.compute.amazonaws.com -i mykey.pem

Try InSpec

See how InSpec can help you quickly identify potential compliance and security issues on your infrastructure. Try InSpec using this step-by-step quick start module on Learn Chef Rally.

The post Compliance with InSpec: Any Node. Any Time. Anywhere. appeared first on Chef Blog.


----


ChefConf 2018 CFP: Compliance Automation Track



----
ChefConf 2018 CFP: Compliance Automation Track
// Chef Blog

ChefConf is the largest community reunion and educational event for teams on the journey to becoming fast, efficient, and innovative software-driven organizations. In other words, you and your team!

ChefConf 2018 will take place May 23-26 in Chicago, Illinois and we want you to present! The ChefConf call for presenters (CFP) is now open. One of the tracks you might consider proposing a session for is the Compliance Automation track.

Compliance Automation

Every system in your environment is subject to some sort of compliance controls. Some of those controls, such as PCI-DSS, HIPAA, and GDPR, may be prescribed by an external regulatory body. Other controls may be prescribed by teams within your organization, such as the InfoSec team. There may even be controls that you do not think of as "compliance", such as a control or policy that states agents should receive updates daily. Defining, modeling, and managing these controls as code is the only way to efficiently and continuously audit and validate that standards are being met.

Assessing Current State

One of the first steps to automating an environment is getting a handle on the current state of that environment. InSpec is a human-readable language for specifying compliance, security and other policy requirements. Capture your policy in InSpec tests and run those tests against remote nodes to easily assess whether those nodes are configured properly. Running these tests across the entire fleet is as easy as adding the audit cookbook to your nodes' run lists. The chef-client will send InSpec test results as well as lots of information about the node (such as ohai attributes) off to Chef Automate. From there, you will be able to quickly detect and assess which nodes require intervention or remediation and which are compliant with the prescribed policies.
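
As a concrete example, running a community baseline straight from a git repository against a remote node takes a single command (the host, user, and key below are placeholders):

inspec exec https://github.com/dev-sec/linux-baseline -t ssh://admin@web01.example.com -i ~/.ssh/web01.pem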

Presenting about how InSpec helped you with your compliance needs can be extremely powerful to those just starting their compliance journey. For example:

  • How are you running InSpec tests against your fleet? What inconsistencies have you discovered?
  • Are you continuously checking your compliance status with the audit cookbook?
  • How has InSpec impacted your mean time-to-detect issues?

Compliance Profiles

Chef Automate ships with over 80 compliance profiles, many based on the Center for Internet Security (CIS) Benchmarks. The community is sharing compliance profiles on the Chef Supermarket. You may be writing your own compliance profiles to capture the unique requirements for your business and infrastructure. As a community, the practices for managing compliance profiles are still emerging. For example, profile inheritance makes it easy to share profiles across your fleet and even across the community. Profile attributes allow authors to abstract the data associated with a profile. Metadata in controls, such as impact, tags, and external references provide additional context for deciding what to do when there is a failure.
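
If you are just starting to write your own profiles, the InSpec CLI gives you scaffolding and validation out of the box (the profile name is a placeholder):

inspec init profile acme-baseline   # generate a skeleton profile
inspec check acme-baseline          # validate metadata and control syntax
inspec archive acme-baseline        # package the profile for sharing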

  • How is your team collaborating on profile development? Have you defined any practices around repository layout, profiles per node, required metadata, etc.?
  • Which profiles are you using from Chef Automate or the Supermarket? How are you sharing custom profiles?
  • Are profiles enabling better collaboration between various parts of your organization? E.g., InfoSec and Operations, Development and Security.

Custom InSpec Resources

InSpec ships with a myriad of resources for asserting the state of your infrastructure. When these resources aren't enough, or you want to share a resource with your colleagues for use in multiple profiles, you may find it necessary to create custom resources. These resources may cover components not available with the standard resources or may be a way of creating more clear compliance profiles.

  • What custom resources have you developed?
  • How do you write a custom resource?
  • What are some of the pitfalls and benefits of writing custom resources?

Local Development

InSpec is certainly used to model and assess compliance controls. However, it also leads a double life as a very powerful framework for modeling integration tests for infrastructure code. Tools like Test Kitchen make it easy to spin up local infrastructure for testing and validating the results of executing that code. Kitchen-inspec is a plugin that executes InSpec tests during the validation phase of the Test Kitchen lifecycle. This integration testing is done before any code changes are submitted to the production environment. Of course, there are other frameworks that allow for similar integration testing, such as Pester, Bats, or Serverspec.
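
A typical local loop with Test Kitchen and kitchen-inspec looks like this:

kitchen converge   # build and configure the test instance
kitchen verify     # run the suite's InSpec tests against it
kitchen destroy    # tear the instance back down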

  • How are you running integration tests for your infrastructure code?
  • Are you using compliance profiles during your integration testing?
  • How do your integration tests compare to your compliance profiles?
  • Why did you migrate to InSpec for integration tests?

To the cloud! Beyond machine configurations

Assessing and asserting the state of nodes in your fleet is important but perhaps you also have policies that govern how you configure and consume the cloud. These policies may govern how to manage things like security groups, user authentication, and resource groups. In addition to cloud concerns, you may have policies that describe the way applications should be configured. Do you have policies that cover the configuration of your database servers, application servers, or web servers? InSpec is one way to capture these policies as code and regularly assess the state of your cloud and applications.

  • What security policies have you put in place to manage cloud usage?
  • How are you visualizing the state of your cloud compliance controls?
  • What application configurations are you validating with InSpec?

Getting Started

Automating compliance is a relatively new practice and the tools available are quickly evolving. How are you getting started with compliance automation? Have you started with out-of-the-box profiles or custom profiles? Simple integration tests or full compliance profiles? You do not need to be an expert to help others get started. Your experiences getting started with compliance automation are worth sharing, even if as cautionary tales. ChefConf is a great place to help fellow community members get started on the right foot.

  • What do you wish you knew when you first got started?
  • How are you helping people across your organization get started with compliance automation?
  • Which use cases are well-suited for getting started with compliance automation?

DevSecOps

DevOps has always been a cultural and professional movement, not a tool. Of course, there are tools, like git and Chef, that help advance the practices of the movement. Tool choices reinforce and amplify the culture we have. Compliance automation allows us to welcome more people into the DevOps community. Automation increases speed and efficiency while simultaneously decreasing risk in our environments. Sometimes people approach this automation with a bit of skepticism. The role of information security can be fundamentally changed by embracing the collaborative nature of DevOps and the automation of security practices.

  • What challenges or successes have you had welcoming security professionals to your DevOps practices?
  • How is the role of security changing in your organization?
  • How have your practices for handling zero-day vulnerabilities changed?

Other Tracks

The ChefConf CFP is open for the following tracks:

  • Infrastructure Automation
  • Compliance Automation
  • Application Automation
  • People, Process, and Team
  • Delivering Delight
  • Chaos Engineering
  • Don't Label Me!

Share Your Story

Your story and experiences are worth sharing with the community. Help others learn and further your own knowledge through sharing. The ChefConf CFP is open now. Use some of the questions posed here to help form a talk proposal for the compliance automation track.

Submit your talk proposal now! The deadline is Wednesday, January 10, 2018 at 11:59 PM Pacific time.

The post ChefConf 2018 CFP: Compliance Automation Track appeared first on Chef Blog.


----


Modernizing Deployments with Chef and Habitat at Media Temple



----
Modernizing Deployments with Chef and Habitat at Media Temple
// Chef Blog

Media Temple has been a leading web hosting provider for nearly 20 years. As a Software Developer there, George Marshall is responsible for focusing on products that help their customers realize their goals in taking their ideas online. We recently interviewed George about how his team is using Chef and Habitat. You can watch a recording of the interview below. George told us how crucial automation has become in day to day application deployments, and in particular he shared his excitement about working with Habitat.

Big Ideas, Small Shippable

"…since you're only pulling in what you need, you're able to have a very, very small deliverable to your end production."

A theme George kept coming back to is the value of having very small shippable components. In traditional environments, on bare metal servers or VMs, typically we'll start with an operating system image with its own pre-installed software and libraries, as well as organization-wide software baselines, onto which additional dependencies will need to be installed before we're finally able to deploy our application.

As systems have grown more complex over time, even so-called "minimal" installs can have a lot of moving parts. What's worse, it can be difficult to discern which elements are tied to the function of the underlying operating system, which are tied to components of our application, and which aren't in use by anything at all! This can cause a variety of challenges, from making it difficult to predict the impact of software upgrades, to inconsistencies between environments causing much-dreaded "but it worked on my machine!" issues. In any case, these concerns are apt to slow our development pace to a crawl.

With Habitat, rather than starting with an operating system and building up to an application environment, we start with an application and pull in any required dependencies to build an artifact that contains everything it needs to run, and nothing it doesn't. Whether we run those artifacts on VMs, bare metal machines, or export them to containers, they'll run consistently regardless of what is or isn't installed on the host OS. Per George, this in turn can "…make new platforms a bit less challenging by being a little bit more agnostic… and you have a little bit more flexibility in how you want to take your deployments."

Builder: Dependencies Done Right

"What you're shipping is gonna be a better product."

As we talked, George was particularly enthusiastic about Habitat's Builder feature, and in particular its dependent build functionality. As mentioned previously, evaluating the impact of package upgrades has historically been a significant source of pain, as each upgrade might have multiple services that depend on it, and if applied inconsistently, conflicts can be difficult to diagnose and repair. Habitat plans allow you to define dependencies on other projects and configure automatic rebuilds whenever any upstream components are updated. In George's view, "The big benefit of that is the integration testing you can get out of that. And being able to make sure every component in the stack is going to work with each other and be copacetic."

In other words, since Habitat builds its artifacts with everything they need to run, these automatic builds will produce ready-to-test versions of our application without us needing to take any manual steps to parse the dependency tree or stage an upgrade.

Learn More

The post Modernizing Deployments with Chef and Habitat at Media Temple appeared first on Chef Blog.


----


Application Automation with Habitat and Kubernetes



----
Application Automation with Habitat and Kubernetes
// Chef Blog

Habitat, Chef's next generation application automation framework, provides a powerful suite of integrated capabilities in service of seamlessly and continuously building, deploying, and running your application and the services that need to run to support and scale your application across a distributed infrastructure.

With KubeCon kicking off today in Austin, we are super excited to highlight a bunch of product capabilities, built in partnership with our friends at Kinvolk, that blend the edges of Habitat and Kubernetes and unify these two powerful tools and ecosystems into one awesome application delivery experience.

Kubernetes is a platform that runs containerized applications and supports container scheduling, orchestration, and service discovery. It allows you to abstract a datacenter of computers into well managed compute resources to power your workloads as you continuously update and scale them. When you use Habitat with Kubernetes, Habitat manages the application building, packaging, and deployment, and then Kubernetes manages the infrastructure, container scheduling, orchestration, and service discovery.

The end goal of using Habitat and Kubernetes together is to empower your developers to continuously build and deploy application artifacts using Habitat's Builder automation. These artifacts will automatically deploy to their Kubernetes staging clusters, and when ready, developers can promote application updates to the production clusters simply by running hab pkg promote --production my/software/1.2.3/20171201134501.

Towards this, today we are announcing some major updates to our Habitat Operator for Kubernetes, including the ability to promote application artifacts between clusters as part of your  continuous delivery practice.

We have fully supported Kubernetes packages hosted at Habitat Builder, so you can build and deploy your Kubernetes clusters and update them as Kubernetes is updated, using Habitat's Builder capabilities. This lets you take advantage of Habitat's immutable build guarantees of reproducibility, isolation, portability, and atomicity to run your cluster wherever you need to.

We are also introducing a Habitat Kubernetes exporter to add to our exporter options. This means you can export any of your Habitat-built artifacts using `hab pkg export kubernetes`, creating a Docker container along with a Kubernetes manifest that can be deployed to a Kubernetes cluster running the Habitat Operator.
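
A rough sketch of that flow (the package name is a placeholder, and we assume here that the exporter writes the generated manifest to stdout):

hab pkg export kubernetes my-origin/my-app > my-app.yml
kubectl apply -f my-app.yml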

Read more on how to get started with Habitat and Kubernetes

Give it a try and join the community

The post Application Automation with Habitat and Kubernetes appeared first on Chef Blog.


----


Integrating Chef into your VSTS Builds



----
Integrating Chef into your VSTS Builds
// Chef Blog

Here at Chef we have been working with Microsoft to create integrations for Visual Studio Team Services (VSTS).

VSTS is an invaluable and versatile tool for bringing together pipelines within your environments. The pipelines are made up of tasks. These tasks are either core tasks built into VSTS (i.e. created by Microsoft) or tasks available in the VSTS Marketplace. It is in the marketplace that you will find the Chef Integration extension.

In a recent webinar we hosted on November 17, 2017, Eugene Chuvyrov, Senior Software Engineer at Microsoft, talks about the importance of VSTS, what it is capable of, and how it is part of the larger DevOps toolset. VSTS has a number of plugins that can be installed into your account that allow it to perform a myriad of tasks. A recording of the webinar is available at the end of this post.

VSTS Plugins

By installing the Chef extension into your VSTS account you get the following tasks:

It should be noted that although these tasks have been assigned to different build phases, they do not have to be used in those phases. Indeed the live demonstration in the webinar shows how the tasks can be used to create a Chef Cookbook pipeline. To do this the Upload Cookbook to Chef Server task is used in the release phase.

As mentioned, the live demonstration shows how VSTS can be used as a source repository for your cookbook(s), using Git, to create a build and release pipeline. The workflow for this is shown below:

VSTS Cookbook Pipeline

As can be seen from the figure above, there are a number of build tasks that can be triggered manually, by a code check-in, or on a schedule (think nightly builds). This then creates a cookbook artifact, which is placed in a drop location so that the release process can get hold of it.
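
Under the hood, those build tasks map to roughly the same ChefDK commands you would run locally (the cookbook name is a placeholder):

chef exec cookstyle .                 # Ruby style checks
chef exec foodcritic .                # cookbook lint checks
kitchen test                          # converge and verify with Test Kitchen
knife cookbook upload my_cookbook     # publish to the Chef server (release phase)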

A release can be created on a trigger, e.g. when a new artifact has been published. This is not the same as performing a release; it just creates the release from which deployments are performed. Of course, it is possible to have an automatic deployment when a release is created, but this would be a trigger on the release environment itself.

We are continually developing and enhancing the VSTS extension. Things that are in the backlog at the moment are:

  • Ensure the extension works on Windows
  • Create an 'Install ChefDK' task
  • Create tasks to perform linting operations so that it is not a command line task
  • Create task to run Test Kitchen

To see the tasks that are being worked on please refer to the VSTS Chef Extension Issues page.

For information on how to use the tasks please refer to the VSTS Chef Extension Wiki.

For a step by step guide of what is performed in the webinar please refer to the blog post Chef Cookbook Pipeline with VSTS. Please note that this was created before the 'Publish to Supermarket' task was available.

If you would like to see the webinar again or you missed it the first time around please find it below.

The post Integrating Chef into your VSTS Builds appeared first on Chef Blog.


----


Transforming IT Operations with Chef and Schuberg Philis



----
Transforming IT Operations with Chef and Schuberg Philis
// Chef Blog

Schuberg Philis is an IT services company that helps their customers realize their digital transformation goals. Joris van Lieshout is a Mission Critical Engineer at Schuberg Philis, and is responsible for designing and supporting infrastructure for a wide array of customers. This past spring we had a conversation with Joris to discuss how his team is using Chef to help his clients achieve unprecedented velocity and creativity while maintaining the environmental security their customers depend on.

Achieving Velocity in Regulated Spaces

"We achieve a more secure environment by using a tool like Chef because we create reliable environments where change is not done manually, but automated."

With the growing size and complexity of modern environments, organizations often find it difficult to release updates with the velocity they'd like due to the potential for changes to bring with them performance or security concerns. Nowhere is this more keenly felt than in industries subject to regulatory requirements, where the stakes are high, and a hastily prepared deploy could have a profound impact on their compliance audits. Because of this, Joris told us, many of his customers were limited to at best quarterly, and in some cases even annual release cycles. Much of this was due to the fact that existing practices for deploying complex applications involved many manual steps, each adding a layer of risk and necessitating a slow, deliberate release cadence.

With Chef, the Schuberg Philis team is able to help their customers automate the configuration and deployment of their applications, allowing them to deploy with greater ease and consistency, which in turn makes validating their compliance that much easier. This allows for more frequent releases, even in highly regulated spaces — as Joris told us, "We have quite a few banking customers where we can do weekly or every two week production releases of new features…" In other words, automation brought with it an increase in speed and efficiency, while also reducing the risks inherent to each deploy.

Automate the Backlog, and Drive New Innovation

"Everything you do twice should be automated…"

An often overlooked cost of managing large environments manually is that even ostensibly trivial changes can be time consuming to implement. Add to that the typical issues or maintenance that are a part of day to day operations, and the result is often IT organizations that are struggling to keep up with backlogs of requests. With each new task a team automates, there's one less problem that needs to be re-solved in the future. In turn, this makes recurring issues easier to address, requests easier to fulfill, and operations teams able to spend more time planning and less time reacting. Or as Joris observed in a number of his customers, "…instead of being the department that says no to the business because they're too busy… [IT Operations is] now a platform for business to experiment and create new functionality."

Next Steps

The post Transforming IT Operations with Chef and Schuberg Philis appeared first on Chef Blog.


----
