Tuesday, December 12, 2017

pfSense 2.4.2-RELEASE now available



// Netgate Blog

We are excited to announce the release of pfSense® software version 2.4.2, now available for new installations and upgrades!


----


Application Detection on pfSense® Software



// Netgate Blog

Thanks to the Snort package and OpenAppID, pfSense is now application-aware.


----


Achieve DevOps Bliss in a Hybrid Cloud with Chef and CloudBolt



// Chef Blog

What is CloudBolt?

Since you're reading this blog, we'll assume that you already know what Chef is, but you may not be as familiar with CloudBolt. CloudBolt is a hybrid cloud management platform that provides an intuitive interface for all IT requests. Users who need to request and manage resources can use the CloudBolt UI to get the resources they want, when they want them, and have them deployed automatically in accordance with the organization's policies and best practices.

The integration between CloudBolt and Chef (first released in 2013) makes sense because CloudBolt acts as the self-service catalog with business logic and policies, while Chef acts as the configuration manager automating operational logic and policies. CloudBolt is great at providing a simple UI to drive complex, orchestrated builds, and Chef is great at managing the configuration.

What does DevOps Bliss in a Hybrid Cloud Look Like?

A common yet lofty goal for enterprise IT organizations is to provide a self-service, hybrid cloud interface that enables end users to create and manage VMs. At the same time, IT staff must ensure that these systems are built consistently, according to the organization's standards, and use DevOps best practices such as modeling infrastructure as code.

Chef and CloudBolt, when used in tandem, can achieve this state of DevOps bliss. Together, they empower users with self-service, freeing IT staff from the tedious process of manually building out systems, and instead enabling them to focus on more strategic, higher value work.

DevOps is a good thing when a few people in the organization can take advantage of it, but it can only reach its full potential when it is open to all users of IT resources.

Prerequisites

The following assumes you have both Chef and CloudBolt installed in your environment, and that you have administrative access to both of them.

Integrating Chef and CloudBolt

CloudBolt ships with built-in Chef integration. From the CloudBolt user interface:

  • Browse to Admin > Configuration Managers, click the "Add a configuration manager" button and choose Chef.
  • Complete the form, providing information on your Chef server.
  • A few additional steps are needed, including installing the proper version of the Chef development kit on the CloudBolt server. The CloudBolt UI contains a link to complete instructions, or you can follow this one: http://docs.cloudbolt.io/configuration-managers/chef/index.html.

(Screenshot: creating a Chef configuration manager in CloudBolt)

Once basic communication is established from CloudBolt to Chef, you can import Chef roles and cookbooks into CloudBolt from the corresponding tabs on the page for that Chef configuration manager. After that is done, you can choose which environments in CloudBolt should be Chef-enabled. New VMs built in these environments will automatically have a Chef agent bootstrapped onto them, along with any recipes or roles that you specify should be applied to all new VMs.

(Screenshot: managing Chef from CloudBolt, importing cookbooks)

This setup is consistent whether you are building in your private data center with a virtualization technology like VMware or Nutanix AHV, or in one of the eight public clouds that CloudBolt supports.

Now you can test new server builds and use the job details page to see how the Chef agent is installed and configured, along with the output from applying the specified roles and recipes.

(Screenshot: a server build in CloudBolt that installed the Chef agent)

Benefits

Ease of use is a key benefit of CloudBolt. As a result, a broad range of users can perform on-demand server and application stack builds. In addition, this larger group of people will now be able to take advantage of the power of Chef. The utility of your Chef investment is effectively multiplied by the number of users that use CloudBolt to build and manage environments.

  • IT shops that use both CloudBolt and Chef build environments with confidence, and know that the resulting systems will be configured consistently across dev, QA, and prod environments, across on-prem and cloud deployments.
  • One joint user of Chef and CloudBolt created a blueprint for deploying Hadoop clusters. With just a few clicks they are now able to deploy scalable 50-node Hadoop clusters in the public cloud, using CloudBolt as the user interface and orchestrator, and Chef to do the heavy lifting of configuring each node and installing the appropriate Hadoop software.
  • Since CloudBolt supports multiple Chef servers, Chef shops can more easily use several Chef servers, possibly of different versions, all from one user interface.

In short: DevOps bliss is achieved in a hybrid cloud with Chef and CloudBolt.

Conclusion

Watch for future CloudBolt-contributed blog posts (we have one on the way about the aforementioned Hadoop blueprint), or let us know what related topics you would like to see posts on.

Visit cloudbolt.io or schedule a demo to learn more.

The post Achieve DevOps Bliss in a Hybrid Cloud with Chef and CloudBolt appeared first on Chef Blog.


----


Compliance with InSpec: Any Node. Any Time. Anywhere.



// Chef Blog

InSpec is an agentless compliance scanner: you can use it to scan a machine's configuration settings without installing anything on it, changing its configuration, or otherwise affecting its state. Compliance scanning is important for many reasons, among them assessing formal regulatory compliance, diagnosing emerging or recurring security concerns, and defining compliance standards that suit your unique systems and needs. InSpec gives you near-immediate insight into your system. You can combine the power of InSpec with the flexibility of working from your Android phone, which lets you apply your compliance tools to any node, at any time, and from anywhere.
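
For a sense of what InSpec code looks like, here is a minimal control in the spirit of the ssh-baseline checks used later in this post. The control ID, title, and expected value are illustrative rather than taken from any shipped profile; sshd_config is a real InSpec resource.

    control 'sshd-example-01' do
      impact 1.0
      title 'SSH daemon should not permit root login'
      desc 'Logging in directly as root makes actions harder to audit.'

      describe sshd_config do
        its('PermitRootLogin') { should eq 'no' }
      end
    end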

Assess the compliance of any machine

What's often missed in the discussion of InSpec as a compliance tool is that you can use it to assess the compliance of any machine, not just nodes under management by Chef. While you can execute InSpec scans as part of a chef-client run, it's equally effective at scanning systems not under active configuration management. InSpec can also scan Docker containers, virtual machines, and on-site hardware, as well as systems managed by Ansible or Puppet.

From a practical perspective, the ability to scan any system reflects the reality of large operations with many different groups that may have many different methods of configuration management and deployment, or none at all. While, of course, we think you should use Chef products for all of your configuration and deployment needs, we are also realistic enough to recognize that you have compliance needs regardless of whatever decisions you have already made.

Scan an Ansible Tower using InSpec on Android

Using InSpec for compliance scanning is as simple as downloading InSpec, selecting a compliance profile from Chef Supermarket, and running it against a machine. To scan a node, all you need is InSpec, the address of the node, and the key for the node. In the following video, I scanned an Ansible Tower instance running on AWS, using InSpec installed in the Termux app on my new-in-2015 Samsung Galaxy Note 5, and pulled the ssh-baseline profile available from the Chef Supermarket.

If you're interested in trying this out, you'll need an Android phone, the free Termux App, access to a node somewhere, and your ssh key.

Setting up Termux on Android is exactly like setting up any other computer: it takes some tweaking. For running InSpec, you'll need to set Termux up for Ruby development, enable it to compile, install InSpec, and make Git available. Your installation may vary, but for my phone, I needed:

1) Set up for Ruby development:

apt-get install ruby
apt-get install ruby-dev

2) Set up to compile:

apt-get install make
apt-get install libffi
apt-get install libffi-dev
apt-get install chomp

3) Set up and install InSpec:

gem install bundler
bundle install
gem install inspec

4) Make Git available:

apt-get install openssl
gem install git

To run the scan, the command syntax is:

inspec supermarket exec dev-sec/ssh-baseline -t ssh://ipaddress -i mykey.pem

For example:

inspec supermarket exec dev-sec/ssh-baseline -t ssh://ec2-user:ec2-user@ec2-34-211-195-159.us-west-2.compute.amazonaws.com -i mykey.pem

Try InSpec

See how InSpec can help you quickly identify potential compliance and security issues on your infrastructure. Try InSpec using this step-by-step quick start module on Learn Chef Rally.

The post Compliance with InSpec: Any Node. Any Time. Anywhere. appeared first on Chef Blog.


----


ChefConf 2018 CFP: Compliance Automation Track



// Chef Blog

ChefConf is the largest community reunion and educational event for teams on the journey to becoming fast, efficient, and innovative software-driven organizations. In other words, you and your team!

ChefConf 2018 will take place May 22-25 in Chicago, Illinois and we want you to present! The ChefConf call for presenters (CFP) is now open. One of the tracks you might consider proposing a session for is the Compliance Automation track.

Compliance Automation

Every system in your environment is subject to some sort of compliance controls. Some of those controls, such as PCI-DSS, HIPAA, and GDPR, may be prescribed by an external regulatory body. Other controls may be prescribed by teams within your organization, such as the InfoSec team. There may even be controls that you do not think of as "compliance", such as a control or policy that states agents should receive updates daily. Defining, modeling, and managing these controls as code is the only way to efficiently and continuously audit and validate that standards are being met.

Assessing Current State

One of the first steps to automating an environment is getting a handle on the current state of that environment. InSpec is a human-readable language for specifying compliance, security and other policy requirements. Capture your policy in InSpec tests and run those tests against remote nodes to easily assess whether those nodes are configured properly. Running these tests across the entire fleet is as easy as adding the audit cookbook to your nodes' run lists. The chef-client will send InSpec test results as well as lots of information about the node (such as ohai attributes) off to Chef Automate. From there, you will be able to quickly detect and assess which nodes require intervention or remediation and which are compliant with the prescribed policies.
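
As a sketch of what that wiring can look like, the audit cookbook is driven by node attributes along these lines. Attribute names and reporter values have varied across audit cookbook versions, and the profile path shown is hypothetical, so treat this as illustrative rather than definitive.

    # Illustrative attributes for the audit cookbook; check your cookbook
    # version's README for the exact attribute shapes.
    default['audit']['reporter'] = 'chef-server-automate'
    default['audit']['profiles'] = [
      {
        'name'       => 'ssh-baseline',
        'compliance' => 'admin/ssh-baseline' # hypothetical profile path in Automate
      }
    ]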

Presenting about how InSpec helped you with your compliance needs can be extremely powerful for those just starting their compliance journey. For example:

  • How are you running InSpec tests against your fleet? What inconsistencies have you discovered?
  • Are you continuously checking your compliance status with the audit cookbook?
  • How has InSpec impacted your mean time-to-detect issues?

Compliance Profiles

Chef Automate ships with over 80 compliance profiles, many based on the Center for Internet Security (CIS) Benchmarks. The community is sharing compliance profiles on the Chef Supermarket. You may be writing your own compliance profiles to capture the unique requirements for your business and infrastructure. As a community, the practices for managing compliance profiles are still emerging. For example, profile inheritance makes it easy to share profiles across your fleet and even across the community. Profile attributes allow authors to abstract the data associated with a profile. Metadata in controls, such as impact, tags, and external references, provides additional context for deciding what to do when there is a failure.
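
As an example of inheritance, a wrapper profile can pull in everything from an upstream profile and selectively skip what does not apply. include_controls and skip_control are real InSpec profile-inheritance APIs; the skipped control ID below is hypothetical, and the wrapped profile must be declared under "depends" in the wrapper's inspec.yml.

    # controls/wrapper.rb in a wrapper profile
    include_controls 'ssh-baseline' do
      skip_control 'sshd-11' # hypothetical ID: skip a control that does not apply here
    end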

  • How is your team collaborating on profile development? Have you defined any practices around repository layout, profiles per node, required metadata, etc.?
  • Which profiles are you using from Chef Automate or the Supermarket? How are you sharing custom profiles?
  • Are profiles enabling better collaboration between various parts of your organization? E.g., InfoSec and Operations, Development and Security.

Custom InSpec Resources

InSpec ships with a myriad of resources for asserting the state of your infrastructure. When these resources aren't enough, or you want to share a resource with your colleagues for use in multiple profiles, you may find it necessary to create custom resources. These resources may cover components not available with the standard resources or may be a way of creating clearer compliance profiles.
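
A custom resource is plain Ruby dropped into a profile's libraries/ directory. The sketch below defines a hypothetical app_config resource for an imaginary /etc/myapp.conf; the Inspec.resource(1) API is real, while everything application-specific is made up for illustration.

    # libraries/app_config.rb
    class AppConfig < Inspec.resource(1)
      name 'app_config'
      desc 'Exposes key = value settings from a hypothetical /etc/myapp.conf'

      def initialize
        @content = inspec.file('/etc/myapp.conf').content.to_s
      end

      # Return the value of a "key = value" line, or nil if absent.
      def setting(key)
        @content[/^#{Regexp.escape(key)}\s*=\s*(\S+)/, 1]
      end
    end

    # Used in a control:
    #   describe app_config.setting('log_level') do
    #     it { should eq 'info' }
    #   end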

  • What custom resources have you developed?
  • How do you write a custom resource?
  • What are some of the pitfalls and benefits of writing custom resources?

Local Development

InSpec is certainly used to model and assess compliance controls. However, it also leads a double life as a very powerful framework for modeling integration tests for infrastructure code. Tools like Test Kitchen make it easy to spin up local infrastructure for testing and validating the results of executing that code. Kitchen-inspec is a plugin that executes InSpec tests during the validation phase of the Test Kitchen lifecycle. This integration testing is done before any code changes are submitted to the production environment. Of course, there are other frameworks that allow for similar integration testing, such as Pester, BATS, or Serverspec.
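
With kitchen-inspec, the integration tests are ordinary InSpec code living under test/integration/ in the cookbook. A minimal sketch for a hypothetical web server cookbook might look like this; the package and service names are assumptions.

    # test/integration/default/default_test.rb
    # Run by kitchen verify when the suite's verifier is set to inspec.
    describe package('nginx') do
      it { should be_installed }
    end

    describe service('nginx') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(80) do
      it { should be_listening }
    end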

  • How are you running integration tests for your infrastructure code?
  • Are you using compliance profiles during your integration testing?
  • How do your integration tests compare to your compliance profiles?
  • Why did you migrate to InSpec for integration tests?

To the cloud! Beyond machine configurations

Assessing and asserting the state of nodes in your fleet is important but perhaps you also have policies that govern how you configure and consume the cloud. These policies may govern how to manage things like security groups, user authentication, and resource groups. In addition to cloud concerns, you may have policies that describe the way applications should be configured. Do you have policies that cover the configuration of your database servers, application servers, or web servers? InSpec is one way to capture these policies as code and regularly assess the state of your cloud and applications.
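
The same control language extends naturally to application-level policy. As an illustration (the file path and required value are hypothetical), a control validating a web server's TLS configuration might read:

    control 'webserver-tls-01' do
      impact 0.8
      title 'Web server should only accept modern TLS'

      describe file('/etc/nginx/nginx.conf') do
        its('content') { should match(/ssl_protocols\s+TLSv1\.2/) }
      end
    end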

  • What security policies have you put in place to manage cloud usage?
  • How are you visualizing the state of your cloud compliance controls?
  • What application configurations are you validating with InSpec?

Getting Started

Automating compliance is a relatively new practice and the tools available are quickly evolving. How are you getting started with compliance automation? Have you started with out-of-the-box profiles or custom profiles? Simple integration tests or full compliance profiles? You do not need to be an expert to help others get started. Your experiences getting started with compliance automation are worth sharing, even if as cautionary tales. ChefConf is a great place to help fellow community members get started on the right foot.

  • What do you wish you knew when you first got started?
  • How are you helping people across your organization get started with compliance automation?
  • Which use cases are well-suited for getting started with compliance automation?

DevSecOps

DevOps has always been a cultural and professional movement, not a tool. Of course, there are tools, like git and Chef, that help advance the practices of the movement. Tool choices reinforce and amplify the culture we have. Compliance automation allows us to welcome more people into the DevOps community. Automation increases speed and efficiency while simultaneously decreasing risk in our environments. Sometimes people approach this automation with a bit of skepticism. The role of information security can be fundamentally changed by embracing the collaborative nature of DevOps and the automation of security practices.

  • What challenges or successes have you had welcoming security professionals to your DevOps practices?
  • How is the role of security changing in your organization?
  • How have your practices for handling zero-day vulnerabilities changed?

Other Tracks

The ChefConf CFP is open for the following tracks:

  • Infrastructure Automation
  • Compliance Automation
  • Application Automation
  • People, Process, and Team
  • Delivering Delight
  • Chaos Engineering
  • Don't Label Me!

Share Your Story

Your story and experiences are worth sharing with the community. Help others learn and further your own knowledge through sharing. The ChefConf CFP is open now. Use some of the questions posed here to help form a talk proposal for the compliance automation track.

Submit your talk proposal now! The deadline is Wednesday, January 10, 2018 at 11:59 PM Pacific time.

The post ChefConf 2018 CFP: Compliance Automation Track appeared first on Chef Blog.


----


Modernizing Deployments with Chef and Habitat at Media Temple



// Chef Blog

Media Temple has been a leading web hosting provider for nearly 20 years. George Marshall, a Software Developer there, focuses on products that help customers realize their goals in taking their ideas online. We recently interviewed George about how his team is using Chef and Habitat. You can watch a recording of the interview below. George told us how crucial automation has become in day-to-day application deployments, and in particular he shared his excitement about working with Habitat.

Big Ideas, Small Shippable

"…since you're only pulling in what you need, you're able to have a very, very small deliverable to your end production."

A theme George kept coming back to is the value of having very small shippable components. In traditional environments, on bare metal servers or VMs, typically we'll start with an operating system image with its own pre-installed software and libraries, as well as organization-wide software baselines, onto which additional dependencies will need to be installed before we're finally able to deploy our application.

As systems have grown more complex over time, even so-called "minimal" installs can have a lot of moving parts. What's worse, it can be difficult to discern which elements are tied to the function of the underlying operating system, which are tied to components of our application, and which aren't in use by anything at all! This can cause a variety of challenges, from making it difficult to predict the impact of software upgrades, to inconsistencies between environments causing much-dreaded "but it worked on my machine!" issues. In any case, these concerns are apt to slow our development pace to a crawl.

With Habitat, rather than starting with an operating system and building up to an application environment, we start with an application and pull in any required dependencies to build an artifact that contains everything it needs to run, and nothing it doesn't. Whether we run those artifacts on VMs or bare metal machines, or export them to containers, they'll run consistently regardless of what is or isn't installed on the host OS. Per George, this in turn can "…make new platforms a bit less challenging by being a little bit more agnostic… and you have a little bit more flexibility in how you want to take your deployments."

Builder: Dependencies Done Right

"What you're shipping is gonna be a better product."

As we talked, George was particularly enthusiastic about Habitat's Builder feature, and especially its dependent build functionality. As mentioned previously, evaluating the impact of package upgrades has historically been a significant source of pain, as each upgrade might have multiple services that depend on it, and if applied inconsistently, conflicts can be difficult to diagnose and repair. Habitat plans allow you to define dependencies on other projects and configure automatic rebuilds whenever any upstream components are updated. In George's view, "The big benefit of that is the integration testing you can get out of that. And being able to make sure every component in the stack is going to work with each other and be copacetic."

In other words, since Habitat builds its artifacts with everything they need to run, these automatic builds will produce ready-to-test versions of our application without us needing to take any manual steps to parse the dependency tree or stage an upgrade.

Learn More

The post Modernizing Deployments with Chef and Habitat at Media Temple appeared first on Chef Blog.


----


Application Automation with Habitat and Kubernetes



// Chef Blog

Habitat, Chef's next-generation application automation framework, provides an integrated suite of capabilities for seamlessly and continuously building, deploying, and running your application, together with the services needed to support and scale it across a distributed infrastructure.

With KubeCon kicking off today in Austin, we are super excited to highlight a set of product capabilities, built in partnership with our friends at Kinvolk, that blend the edges of Habitat and Kubernetes and unify these two powerful tools and ecosystems into one awesome application delivery experience.

Kubernetes is a platform that runs containerized applications and supports container scheduling, orchestration, and service discovery. It allows you to abstract a datacenter of computers into well-managed compute resources to power your workloads as you continuously update and scale them. When you use Habitat with Kubernetes, Habitat manages the application building, packaging, and deployment, and Kubernetes manages the infrastructure, container scheduling, orchestration, and service discovery.

The end goal of using Habitat and Kubernetes together is to enable your developers to continuously build and deploy application artifacts using Habitat's Builder automation. These artifacts will automatically deploy to their Kubernetes staging clusters, and when ready, developers can promote application updates to the production clusters simply by running hab pkg promote --production my/software/1.2.3/20171201134501.

Toward this end, today we are announcing some major updates to our Habitat Operator for Kubernetes, including the ability to promote application artifacts between clusters as part of your continuous delivery practice.

We have fully supported Kubernetes packages hosted at Habitat Builder, so you can build and deploy your Kubernetes clusters and update them as Kubernetes is updated, using Habitat's Builder capabilities. This lets you take advantage of Habitat's immutable build guarantees of reproducibility, isolation, portability, and atomicity to run your cluster wherever you need to.

We are also introducing a Habitat Kubernetes exporter to add to our exporter options. This means you can export all of your Habitat-built artifacts using `hab pkg export kubernetes` and create a Docker container with a Kubernetes manifest that can be deployed to a Kubernetes cluster running the Habitat Operator.

Read more on how to get started with Habitat and Kubernetes

Give it a try and join the community

The post Application Automation with Habitat and Kubernetes appeared first on Chef Blog.


----


Integrating Chef into your VSTS Builds



// Chef Blog

Here at Chef we have been working with Microsoft to create integrations for Visual Studio Team Services (VSTS).

VSTS is an invaluable and versatile tool for bringing together pipelines within your environments. The pipelines are made up of tasks. These tasks are either core tasks built into VSTS (i.e., created by Microsoft) or available in the VSTS Marketplace. It is in the marketplace that you will find the Chef Integration extension.

In a recent webinar we hosted on November 17, 2017, Eugene Chuvyrov, Senior Software Engineer at Microsoft, talked about the importance of VSTS, what it is capable of, and how it fits into the larger DevOps toolset. VSTS has a number of plugins that can be installed into your account that allow it to perform a myriad of tasks. A recording of the webinar is available at the end of this post.

VSTS Plugins

Installing the Chef extension into your VSTS account provides tasks for use across the phases of your pipeline.

It should be noted that although these tasks have been assigned to different build phases, they do not have to be used in those phases. Indeed, the live demonstration in the webinar shows how the tasks can be used to create a Chef cookbook pipeline. To do this, the Upload Cookbook to Chef Server task is used in the release phase.

As mentioned, the live demonstration shows how VSTS can be used as a source repository for your cookbook(s), using Git, to create a build and release pipeline. The workflow for this is shown below:

VSTS Cookbook Pipeline

As can be seen from the figure above, there are a number of build tasks that can be triggered manually, by a code check-in, or on a schedule (think nightly builds). This creates a cookbook artifact which is placed in a drop location so that the Release process can get hold of it.

A Release can be created on a trigger, e.g. when a new artifact has been published. This is not the same as performing a release; it just creates the release from which deployments are performed. Of course, it is possible to have an automatic deployment when a release is created, but this would be a trigger on the release environment itself.

We are continually developing and enhancing the VSTS extension. Things that are in the backlog at the moment are:

  • Ensure the extension works on Windows
  • Create an 'Install ChefDK' task
  • Create tasks to perform linting operations so that it is not a command line task
  • Create task to run Test Kitchen

To see the tasks that are being worked on please refer to the VSTS Chef Extension Issues page.

For information on how to use the tasks please refer to the VSTS Chef Extension Wiki.

For a step by step guide of what is performed in the webinar please refer to the blog post Chef Cookbook Pipeline with VSTS. Please note that this was created before the 'Publish to Supermarket' task was available.

If you would like to see the webinar again or you missed it the first time around please find it below.

The post Integrating Chef into your VSTS Builds appeared first on Chef Blog.


----


Transforming IT Operations with Chef and Schuberg Philis



// Chef Blog

Schuberg Philis is an IT services company that helps their customers realize their digital transformation goals. Joris van Lieshout is a Mission Critical Engineer at Schuberg Philis, responsible for designing and supporting infrastructure for a wide array of customers. This past spring we had a conversation with Joris to discuss how his team is using Chef to help clients achieve unprecedented velocity and creativity while maintaining the secure environments their customers depend on.

Achieving Velocity in Regulated Spaces

"We achieve a more secure environment by using a tool like Chef because we create reliable environments where change is not done manually, but automated."

With the growing size and complexity of modern environments, organizations often find it difficult to release updates with the velocity they'd like, due to the potential for changes to bring with them performance or security concerns. Nowhere is this more keenly felt than in industries subject to regulatory requirements, where the stakes are high, and a hastily prepared deploy could have a profound impact on compliance audits. Because of this, Joris told us, many of his customers were limited to quarterly, and in some cases even annual, release cycles. Much of this was because existing practices for deploying complex applications involved many manual steps, each adding a layer of risk and necessitating a slow, deliberate release cadence.

With Chef, the Schuberg Philis team is able to help their customers automate the configuration and deployment of their applications, allowing them to deploy with greater ease and consistency, which in turn makes validating their compliance that much easier. This allows for more frequent releases, even in highly regulated spaces — as Joris told us, "We have quite a few banking customers where we can do weekly or every two week production releases of new features…" In other words, automation brought with it an increase in speed and efficiency, while also reducing the risks inherent to each deploy.

Automate the Backlog, and Drive New Innovation

"Everything you do twice should be automated…"

An often overlooked cost of managing large environments manually is that even ostensibly trivial changes can be time-consuming to implement. Add to that the typical issues and maintenance that are part of day-to-day operations, and the result is often IT organizations struggling to keep up with backlogs of requests. With each new task a team automates, there's one less problem that needs to be re-solved in the future. In turn, this makes recurring issues easier to address, requests easier to fulfill, and operations teams able to spend more time planning and less time reacting. Or as Joris observed in a number of his customers, "…instead of being the department that says no to the business because they're too busy… [IT Operations is] now a platform for business to experiment and create new functionality."

Next Steps

The post Transforming IT Operations with Chef and Schuberg Philis appeared first on Chef Blog.


----


Continuous Automation Using HPE Synergy and Chef Automate



// Chef Blog

As we kick off HPE Discover today in Madrid, it's a great opportunity to look at how HPE and Chef are working together to help IT organizations build and manage great applications securely at scale. HPE and Chef have partnered to bring cloud-like speed and automation to the world of bare metal hardware. You can increase speed, become more efficient, and reduce your risk by using Chef Automate along with HPE OneView to manage your Synergy hardware. The Chef OneView cookbook gives you programmatic access to all aspects of your infrastructure, and allows you to build complex application stacks or virtual machines quickly and efficiently in your own data center. Once everything is built correctly, you can continuously monitor your infrastructure for compliance using Chef Automate.

No App Left Behind

HPE has introduced the Synergy platform, which brings modern infrastructure as code, or composable infrastructure, to the world of bare metal server hardware. Now you can build compute, storage, and network resources from a fluid, dynamic pool of hardware located in your own data center. HPE Synergy can be managed via HPE OneView, a standard API platform for configuring all HPE hardware. This enables a hybrid approach where the same rack of blade servers can be used to run traditional legacy applications as well as more modern applications that run inside virtual machines or containers.

Chef Automate for Synergy

To help Synergy customers get the most from their investment, Chef is bringing infrastructure as code into the realm of Synergy hardware provisioning. The OneView Chef Cookbook contains reusable Chef resources that you can use to build your network, compute, and storage infrastructure. Instead of manually racking, stacking, cabling, and configuring machines, you can focus on delivering new application features and capabilities to your customers. HPE calls this the idea economy. Disruption is all around us, and the ability to turn an idea into a new product or a new industry is more accessible than ever before. Modern software-driven organizations need to move fast, be more efficient, and remain compliant with security requirements. Chef and HPE can help you achieve these goals and ship your ideas faster.

Let's start by taking a closer look at HPE OneView. HPE OneView offers a software-defined approach to managing infrastructure programmatically so you can deploy infrastructure faster, simplify lifecycle operations, and improve productivity with efficient workflow automation and a modern dashboard.

HPE OneView allows you to:

Deploy infrastructure faster – IT generalists can quickly respond to changing business requirements by rapidly and reliably composing and updating compute, storage, and network resources using automated templates created by IT specialists.

Simplify lifecycle operations – Simplify IT operations with a single, unified view of the health of thousands of servers, profiles, and enclosures across multiple data centers using HPE OneView Global Dashboard.

Increase productivity – Automate resource provisioning, configuration, and monitoring with the HPE OneView unified API. Software developers and ISVs can deploy infrastructure as code for more aligned, responsive service delivery.

Simplify your hybrid IT environment – Transform servers, storage, and networking into software-defined infrastructure to eliminate complex manual processes, spur IT collaboration, and increase the speed and flexibility of IT service delivery.

Everything begins with the standard Application Programming Interface, or API. OneView provides an API that you can interact with using a configuration management system like Chef. HPE and Chef have partnered to provide reusable code resources in the OneView Chef cookbook that can provision all of your Synergy environments without resorting to error-prone and slow manual processes.
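
As a rough sketch of what infrastructure as code against OneView can look like in a recipe, consider the following. The resource name, properties, and connection hash here are assumptions based on the cookbook's general style, so verify them against the OneView cookbook documentation before use.

    # Hypothetical recipe using the OneView cookbook's resource style.
    my_client = {
      url:      'https://oneview.example.com',
      user:     'administrator',
      password: 'secret'
    }

    oneview_ethernet_network 'Prod-VLAN-100' do
      client my_client
      data(vlanId: 100, purpose: 'General')
      action :create
    end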

Learn more about Chef and HPE

The post Continuous Automation Using HPE Synergy and Chef Automate appeared first on Chef Blog.


----


ChefConf 2018 CFP: Infrastructure Automation Track



// Chef Blog

ChefConf is the largest community reunion and educational event for teams on the journey to becoming fast, efficient, and innovative software-driven organizations. In other words, you and your team!

ChefConf 2018 will take place May 22-25 in Chicago, Illinois and we want you to present! The ChefConf call for presentations (CFP) is now open.

A number of tracks have been announced for the conference and we will be describing those tracks in a bit more detail with posts similar to this one. Use these posts for inspiration as you develop a presentation to submit to the CFP or as a preview of what you can expect as an attendee of the conference in May.

Infrastructure Automation

Infrastructure automation is the process by which we automate the provisioning, installation, configuration, and ongoing maintenance of computers within our environment. The ideas, processes, and tools used to manage infrastructure as code are great topics for ChefConf.

Assessing Current State

One of the first steps to automating an environment is getting a handle on the current state of that environment. Simply running the chef-client on a machine that reports data to Chef Automate is a great first step in that process. Ohai, the system profiler, gathers thousands of system attributes, and the chef-client sends this data off to Chef Automate. This data is organized and exposed so that queries can be executed to help answer questions about the environment, such as "what versions of which operating systems are running?" and "how many CPUs does a group of servers have?"
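
Beyond the built-in attributes, Ohai can be extended with small Ruby plugins. A minimal sketch follows; the Ohai.plugin/provides/collect_data DSL is real, while the attribute name and source file are hypothetical.

    # A custom Ohai plugin that surfaces an application release string.
    Ohai.plugin(:AppRelease) do
      provides 'app_release'

      collect_data(:default) do
        if File.exist?('/etc/app-release')
          app_release File.read('/etc/app-release').strip
        end
      end
    end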

  • Are you feeding the Chef data into a separate configuration management database (CMDB)?  
  • What inconsistencies are you uncovering with this data?  
  • What custom ohai plugins have you written or are you using?

Modeling Desired State

Managing infrastructure as code allows us to model the desired state of our infrastructure. Chef accomplishes this through things like resources, recipes, cookbooks, roles, and policyfiles. The extensibility of Chef allows for the creation of custom resources that make the model clearer and easier to reason about. Cookbooks and other policy artifacts can be jointly developed by a team of engineers and can also be shared both within a company and across the entire community.
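
For example, a small custom resource gives a team a clear, domain-specific noun to work with. Everything below (the cookbook, resource file, and properties) is hypothetical, but the property/action DSL is the standard Chef custom resource syntax.

    # resources/app_user.rb in a hypothetical "myapp" cookbook;
    # usable in recipes as: myapp_app_user 'deploy'
    property :username, String, name_property: true
    property :home, String, default: '/home/deploy'

    action :create do
      group new_resource.username

      user new_resource.username do
        gid new_resource.username
        home new_resource.home
        manage_home true
      end
    end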

  • How are you modeling the desired state of your infrastructure?  
  • What custom resources have you built?  
  • How do you share cookbooks and other policy artifacts across your community?

Cloud Automation and Migrations

Building in and migrating to the cloud is fundamentally different from deploying to a data center. Chef and other tools allow you to automate this process. Being in the cloud changes the economics and approaches to things like disaster recovery, scaling up and down, and geolocation of running services.

  • Which clouds are you using?
  • How has moving to the cloud changed your approach?
  • What tools are you using in conjunction with Chef to help manage your cloud instances?

Local Development

The Chef ecosystem is full of tools that make it easy to take a test-driven approach to developing infrastructure code. Development begins on the developer's workstation or laptop. Developers can validate their code locally using tools including Cookstyle, Foodcritic, ChefSpec, Test Kitchen, InSpec, and more. Getting everyone on the team using a similar setup and development process is important.
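
As a taste of that local workflow, a ChefSpec unit test converges a recipe entirely in memory on the workstation. The cookbook and package names below are hypothetical; the runner and matcher APIs are standard ChefSpec.

    # spec/unit/recipes/default_spec.rb
    require 'chefspec'

    describe 'myapp::default' do
      let(:chef_run) do
        ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04')
                            .converge(described_recipe)
      end

      it 'installs the nginx package' do
        expect(chef_run).to install_package('nginx')
      end
    end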

  • What does a day in the life of an infrastructure engineer look like?
  • What are you doing to help new developers get their environments set up quickly?
  • How are you leveraging local testing in your development practices?

Getting Started

Everyone was new to Infrastructure Automation at one point. You do not need to be an expert to help others get started. A lot of experiences from the early days of automation are worth sharing, even if as cautionary tales. ChefConf is a great place to help fellow community members get started on the right foot.

  • What do you wish you knew when you first got started?
  • How are you helping people across your organization get started with infrastructure automation?
  • Which use cases are well-suited for getting started with infrastructure automation?

Other Tracks

The ChefConf CFP is open for the following tracks:

  • Infrastructure Automation
  • Compliance Automation
  • Application Automation
  • People, Process, and Team
  • Delivering Delight
  • Chaos Engineering
  • Don't Label Me!

Share Your Story

Your story and experiences are worth sharing with the community. Help others learn and further your own knowledge through sharing. The ChefConf CFP is open now. Use some of the questions posed here to help form a talk proposal for the infrastructure automation track.

Submit your talk proposal now! The deadline is Wednesday, January 10, 2018 at 11:59 PM Pacific time.

The post ChefConf 2018 CFP: Infrastructure Automation Track appeared first on Chef Blog.


----


Why do Enterprise Organizations Need Communities of Practice?



// Chef Blog

Many large enterprises and mid-sized organizations are looking to build and strengthen their internal culture to improve knowledge sharing, raise morale, and improve staff retention. Only a few years ago these same companies would have turned to Centres of Excellence (CoE); instead, they are now looking to leverage Communities of Practice (CoP).

So what's the difference, I hear you ask? Aren't they the same thing? Well, a 'centre' is exclusive: invite-only, with admission for members and regulars, while 'excellence' suggests that the group are experts with little or nothing more to learn. Conversely, a 'community' is inclusive: it can extend across the whole organization, and all are welcome to join without an invite. The word 'practice' conveys individuals honing their skill, continually learning and sharing from their experience.

(Photo Chef Team: Victoria Jeffrey, Hannah Maddy, Anthony Rees, Nathen Harvey and Matt Ray.)

I recently had the pleasure of spending over a week with Nathen Harvey, VP of Community at Chef. I consider myself a practitioner with a thirst for knowledge and I jumped at the chance to soak up some of Nathen's experience around building, fostering and sustaining communities, some of which I hope to share with you here. 

Nathen often refers to Ubuntu. Apart from being an open source Linux distribution, Ubuntu is used in a more philosophical sense to mean "the belief in a universal bond of sharing that connects all humanity". This further embodies the art that is community, and I felt it was an excellent mantra to remember and refer to when building or participating in a CoP. When talking about community, Nathen also refers to Adam Jacob's keynote from ChefConf 2016, where he describes building humane systems and leaves the audience with the thoughts: "I am because you are; when you suffer, I suffer; when you thrive, I thrive." A simple lesson we could all remember in both our work and home lives.

One area that intrigued me was Nathen's approach to IT-based outages. He uses and practices the Critical Incident Response technique adopted by specialist response units like fire brigades, emergency services, and law enforcement. In this approach, the first person on the scene takes the role of Incident Commander, because they have the most context and understanding of the issue or outage. From here a set ceremony script is followed, with the Commander officially calling an incident and thereby gaining the authority to engage the help of any team members required to fix the situation. At Chef, the Incident Commander also fulfils the role of the central internal and external communications officer, providing status updates and shielding the team from constant questions, allowing them to get on and just fix the problem.

The Incident Commander uses the OODA methodology: Observe, Orientate, Decide, and Act. This technique is used by each member of the team, allowing them to take the time to observe the impact, orientate themselves to understand the extent of the issue and any flow-on impact, collectively decide on the best way to fix the problem, and finally act on the solution.

The post-mortem is also quite unique. Everyone at Chef is invited to attend (Yes! The whole organization!) and if it impacts the Chef community, then it is often run on Google Hangouts and streamed to YouTube for those that cannot attend to watch later and provide feedback. All this further extends the 'inclusion' and strengthens the community as a whole. 

If you are interested in watching a YouTube post-mortem hosted by Nathen, check out the video below.

Again, the ceremony of the post-mortem opens by stating that this is not to lay blame or point fingers but to improve the way we work together and learn from the incident. The post-mortem concentrates on two fundamental questions.

  • How could we have detected the issue faster?
  • How could we have resolved the issue sooner?

Nothing else matters at this stage. The incident is done: it has occurred, it's fixed, and you can't turn back time… but you can learn from the event.

The Chef community uses Slack rather than email for asynchronous communications, and video chat for meetings wherever possible. Once a year the distributed community is brought together face-to-face to meet, learn from each other, and provide feedback, often over a beer or two!

So let me ask you this: is your organization fostering, building and supporting communities of practice? If not, WHY?

Continue the Conversation

The post Why do Enterprise Organizations Need Communities of Practice? appeared first on Chef Blog.


----


Wednesday, November 15, 2017

Russian 'Fancy Bear' Hackers Using (Unpatched) Microsoft Office DDE Exploit



// The Hacker News

Cybercriminals, including state-sponsored hackers, have started actively exploiting a newly discovered Microsoft Office vulnerability that Microsoft does not consider a security issue and has declined to patch. Last month, we reported how hackers could leverage a built-in Microsoft Office feature, called Dynamic Data Exchange (DDE), to perform code execution on the

----


Firefox 57 "Quantum" Released – 2x Faster Web Browser



// The Hacker News

It is time to give Firefox another chance. The Mozilla Foundation today announced the release of its much-awaited Firefox 57, aka Quantum, for Windows, Mac, and Linux, which claims to defeat Google's Chrome. It is fast. Really fast. Firefox 57 is based on an entirely revamped design and overhauled core that includes a brand new next-generation CSS engine written in Mozilla's Rust

----


17-Year-Old MS Office Flaw Lets Hackers Install Malware Without User Interaction



// The Hacker News

You should be extra careful when opening files in MS Office. While the world is still dealing with the threat of Microsoft Office's 'unpatched' built-in DDE feature, researchers have uncovered a serious issue with another Office component that could allow attackers to remotely install malware on targeted computers. The vulnerability is a memory-corruption issue that resides in all versions of

----



How to Host a Deep Web IRC Server for More Anonymous Chatting



// Null Byte « WonderHowTo

Internet Relay Chat, or IRC, is one of the most popular chat protocols on the internet. This technology can be connected to the Tor network in order to create an anonymous and secure chatroom, without the use of public IP addresses. IRC servers allow one to create and manage rooms, users, and automated functions, among other tools, in order to administer an instant messaging environment. IRC's roots began in 1988 when Jarkko Oikarinen decided to attempt to implement a new chat protocol for users at the University of Oulu, Finland. Since then, it's been widely adopted and used as a lightweight...


----


pfSense 2.4.1-RELEASE Now Available



// Netgate Blog

We are excited to announce the release of pfSense® software version 2.4.1, now available for new installations and upgrades!


----


A Great Time to Take A Look at XenServer Enterprise!



// Latest blog entries

Good afternoon everyone,

As we make our way through the last quarter of the year, I wanted to remind the community of the significant progress the XenServer team has achieved over the last 18 months to make XenServer the awesome hypervisor that it is today!

While many of you have been making the most of your free XenServer hypervisor, I would like to take this opportunity to review just a few of the new features introduced in the latest releases of the Enterprise edition - features that our customers have been using to optimize their application and desktop virtualization deployments.

For starters, we've introduced automated updates and live patching, features that streamline the platform upgrade process by enabling multiple fixes to be installed and applied with a single reboot and, in many cases, no reboot whatsoever, significantly reducing downtime for environments that require continuous uptime.

We've also worked with one of our partners to introduce a revolutionary approach to securing virtual workloads, one that is capable of scanning raw memory at the hypervisor layer to detect, protect against, and remediate the most sophisticated attacks on an IT environment. This unique approach provides an effective line of defense against viruses, malware, ransomware, and even rootkit exploits. What's more, this advanced security technique complements security mechanisms already implemented to further strengthen protection of critical IT environments.

Providing a local caching mechanism within the XenServer hypervisor enables our virtual desktop customers to dramatically improve the performance of their virtual desktops, particularly during boot storms. By caching requests for OS image contents in local resources (i.e., memory and storage), XenServer is able to work with Provisioning Services to stream contents directly to virtual desktops, reducing resource utilization (network and CPU) while enhancing user productivity.

Expanded support for virtual graphics allows our customers to leverage their investments in hardware from the major graphics vendors and enable GPU-accelerated virtual desktops that effectively support graphics-intensive workloads.

Designing, developing and delivering features that bring out the best in virtualization technologies... that's our focus. And the invaluable insight and feedback provided by this community will continue to be the driving force behind our innovation efforts.

Interested in evaluating the features described above? Click here.

Until next time,

Andy

 


----


ChefConf 2018 Call for Presenters is Open



// Chef Blog

ChefConf is the largest community reunion and educational event for teams on the journey to becoming fast, efficient, and innovative software-driven organizations. In other words, you and your team!

ChefConf 2018 will take place May 22-25 in Chicago, Illinois and we want you to present! The ChefConf call for presenters (CFP) is now open.

ChefConf attendees are hungry for learning and sharing, and are eager to hear of your successes, experiments, failures, and learnings. Share how you have adopted new workflows, tools, and ways of working as a team. Describe your journey toward becoming outcome-oriented. What have you done to improve speed, increase efficiency, and reduce risk throughout the system? Continuous learning is the name of the game and your experiences are worth sharing!

CFP Basics

Deadline: Wednesday, January 10, 2018 at 11:59 PM Pacific time.

Track themes:

  • Infrastructure Automation
  • Compliance Automation
  • Application Automation
  • People, Process, and Team
  • Delivering Delight
  • Chaos Engineering
  • Don't label me!

Full descriptions of each track can be found on the ChefConf site.

Why Submit a Talk?

ChefConf is the largest gathering of the Chef community. Community is driven by sharing: stories, experiences, challenges, successes, and everything in between. By presenting at ChefConf you are supporting the growth and health of the Chef community.

ChefConf is an ideal platform to spotlight an awesome project you and your team have delivered. Giving insight into your challenges, successes, and knowledge will inspire others to take Automation, DevOps, and Site Reliability even further.

Are you trying to build your own brand or speaker profile? ChefConf gives you a great opportunity to expand your marketability, while helping others do the same.

Take a "talk-driven development" approach and propose a session on something in the Chef universe you are keen to learn more about. This approach will give you even more motivation to learn something new and share it with the community.

There are numerous other reasons to submit, from exercising your storytelling skills to taking advantage of the myriad speakers-only swag and green room amenities. Whatever your motivation, we cannot wait to see your proposal!

What Makes for a Good Proposal?

Be clear, be concise, and be compelling. We received hundreds of submissions for 2017, so brevity is appreciated and ensures your submission will be given thorough consideration.

The best abstracts include a title that clearly states the topic in an interesting way and complete information on the topic and type of talk. If a demo will be involved, let us know and describe it. Bringing in a co-presenter? That's awesome and should be detailed in the proposal.

We've shared details on each track on the ChefConf website. The ChefConf team is also happy to help with your submission, just email us at chefconf@chef.io.

ChefConf 2018 will be here before you know it — we hope to see you presenting in Chicago!

Submit your proposal now.

The post ChefConf 2018 Call for Presenters is Open appeared first on Chef Blog.


----
