Friday, December 2, 2016

Chef Automate Now Available as Fully Managed Service on AWS [feedly]

Chef Automate Now Available as Fully Managed Service on AWS

-- via my feedly newsfeed

"AWS OpsWorks for Chef Automate" Brings Full Chef Platform to AWS Customers for Fast and Secure Management of Cloud and On-Premises Infrastructure at Any Scale

LAS VEGAS – December 1, 2016 – Today at AWS re:Invent 2016, Chef announced that the full capabilities of Chef's commercial platform have been integrated into a new managed service provided, supported, and backed by Amazon Web Services (AWS). AWS OpsWorks for Chef Automate makes configuring, deploying, and scaling cloud and on-premises infrastructure simple and secure by automating infrastructure as code. The new joint solution gives AWS customers access to Chef's entire platform in just a few clicks, while also taking care of all Chef maintenance, from provisioning environments to backups. AWS OpsWorks for Chef Automate underpins core components of DevOps by automating everything from cloud migration to continuous application deployment, accelerating time-to-value for software development and reducing infrastructure management overhead.

"Many of our customers have been using Chef for years to easily manage their cloud infrastructure. This new solution delivers robust automation capabilities with almost no setup or maintenance required," said Scott Wiltamuth, VP of Developer and Productivity Tools, AWS. "AWS OpsWorks for Chef Automate makes it even easier for customers to ensure their cloud and on-premises resources are fast and secure. Development and operations teams have a strong foundation for leveraging DevOps practices in AWS or on-premises to turbocharge software delivery."

With AWS OpsWorks for Chef Automate, customers get a comprehensive automation product bringing together best-in-class cloud and automation capabilities to enable DevOps practices at any scale. Key benefits of AWS OpsWorks for Chef Automate include:

  • Flexibility: Automate any Linux or Windows server including existing Amazon EC2 instances or servers running in a data center. Easily deploy, migrate and manage workloads between environments to maximize IT efficiency.
  • Scalability: The Chef Server can manage tens of thousands of servers with a single instance, enabling customers to easily manage massive AWS environments. AWS OpsWorks enables users to easily select and change the instance type the Chef Server runs on to best align with usage patterns and capacity needs.
  • Easy to administer: No infrastructure or software installation required. AWS OpsWorks manages all Chef administration for the user, from setup to backup and restore.
  • Secure: The compliance features of Chef Automate help you make your servers more secure, with compliance policies that can be written in human-readable language using Chef's open source InSpec language, and automated as code across your infrastructure. And the managed Chef Server itself runs in an Amazon Virtual Private Cloud, so it is isolated from the internet and the user has complete control of inbound and outbound network access.
  • Visibility: The visibility feature of Chef Automate provides insight into operational, compliance, and workflow events to ensure you know everything about your environments at all times.

AWS OpsWorks for Chef Automate is available immediately. AWS OpsWorks will provision a ready-to-use Chef environment in minutes and manage all back-end operations, enabling customers to focus on high-value operations including continuous delivery of infrastructure and applications, and automating compliance policies as code.

"Accelerating time-to-value for software development is only possible by combining cloud, automation at scale, and DevOps," said Ken Cheney, Chief Marketing Officer, Chef. "AWS minimizes the calories organizations burn on infrastructure, while Chef delivers a truly scalable automation platform built for DevOps workflows. We're excited to get AWS OpsWorks for Chef Automate in customers' hands so they can increase IT velocity and drive real business outcomes."

For more information about AWS OpsWorks for Chef Automate, visit Chef's booth, #734, at AWS re:Invent or go here.

The post Chef Automate Now Available as Fully Managed Service on AWS appeared first on Chef Blog.

Inside the new AWS OpsWorks for Chef Automate service [feedly]

Inside the new AWS OpsWorks for Chef Automate service

-- via my feedly newsfeed

We're excited about this morning's announcement at AWS re:Invent unveiling the new AWS OpsWorks for Chef Automate service. This service helps anyone get started quickly with Chef Automate in a low-risk, low-friction way, with all the benefits you'd expect from any native AWS service. This launch is the result of a tight partnership between Chef Software and AWS. We're all very happy to be able to share this news with you.

In case you missed Werner Vogels' keynote, AWS OpsWorks now has a new native service powered by Chef. "AWS OpsWorks for Chef Automate" is a new service in the AWS catalog and it is available today, right now, ready for you to use. As with any other native AWS service, you can start it from the AWS Management Console or from CLI tools.

Who does this impact?

The new service introduces important changes for two specific sets of users: existing OpsWorks users and current non-OpsWorks Chef users.

For existing OpsWorks users, the OpsWorks service you previously knew is now available as "AWS OpsWorks Stacks". AWS OpsWorks Stacks was built on a forked version of open-source Chef and operates in a serverless mode. As a result, OpsWorks users historically haven't been able to use the full ecosystem of Chef tools (e.g. test-kitchen, community cookbooks, etc.). AWS OpsWorks has recently made many improvements to narrow the gap, but significant barriers remain. That mode of operation is still available as AWS OpsWorks Stacks.

For Chef users, this means you now have a low-friction and low-risk way to get started with Chef Automate. Chef Automate is our commercial offering that works with our open-source solutions: Chef, InSpec, and Habitat. AWS OpsWorks for Chef Automate gives you a push-button way to make commercial features available on top of your open-source tools with pay-as-you-go utility pricing and all the features you would expect from an AWS managed service. You can use it to manage your infrastructure regardless of where it lives: on-premises or AWS, it doesn't matter.

What do I get with AWS OpsWorks for Chef Automate?

Fundamentally, the software bits that power on-premises Chef Automate are the same bits you get with AWS OpsWorks for Chef Automate. The difference in OpsWorks is that AWS manages the service for you by providing initial deployment and configuration of the Chef Automate software, automatic backups, a built-in restore mechanism, and automatic software updates. This is a managed Chef service that is supported by AWS. As with any AWS service, that includes pay-as-you-go pricing so you only pay for what you use.

The service includes a configured installation of Chef Automate and Chef server on the same underlying EC2 instance. You may choose to use the included Chef server or, if you're already managing your own open-source Chef server(s) elsewhere, you can add as many external Chef servers as you'd like. The Visibility features of Chef Automate will work automatically if you're using chef-client 12.16.42 or newer (older clients require configuring data collection). The Compliance features of Chef Automate are available by storing profiles on your Chef Automate instance and retrieving them using the 'audit' cookbook. The Workflow features of Chef Automate require setup of additional build nodes and use of Job Dispatch for remote execution.
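For clients older than 12.16.42, data collection has to be pointed at the Automate instance by hand. A minimal sketch of the relevant client.rb settings (the endpoint URL and token below are placeholders, not real values):

```shell
# Sketch: point an older chef-client at Automate's data collector by
# appending the documented settings to client.rb.
# Both values are placeholders for your own Automate endpoint and token.
cat >> /etc/chef/client.rb <<'EOF'
data_collector.server_url "https://your-automate-server/data-collector/v0/"
data_collector.token "your-data-collector-token"
EOF
```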

Getting started

The Starter Kit generated by AWS OpsWorks for Chef Automate includes a README with some basic exercises to get started. A deeper dive into working with Chef Automate can be found in the Learn Chef AWS OpsWorks for Chef Automate tutorials.

How do I get it?

At launch, AWS OpsWorks for Chef Automate is available in three regions: us-east-1, us-west-1, and eu-central-1. From the AWS Management Console select OpsWorks and you can get started using AWS OpsWorks for Chef Automate.
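If you prefer the CLI, a server can also be created with the opsworks-cm commands. The sketch below is hypothetical: the server name, ARNs, and key pair are placeholders, and you should consult the opsworks-cm CLI reference for the exact flags and the IAM roles the service requires.

```shell
# Sketch: create a Chef Automate server via the AWS CLI (opsworks-cm).
# All identifiers below are placeholders.
aws opsworks-cm create-server \
  --region us-east-1 \
  --server-name "my-chef-server" \
  --instance-type "m4.large" \
  --instance-profile-arn "arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role" \
  --service-role-arn "arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role" \
  --key-pair "my-keypair" \
  --backup-retention-count 3
```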

Give it a try and let us know how it goes for you.  There are many enhancements already in the works, but we'd love to hear your feedback.

The post Inside the new AWS OpsWorks for Chef Automate service appeared first on Chef Blog.

VMware vRealize Automation 7.0+ and Chef [feedly]

VMware vRealize Automation 7.0+ and Chef

-- via my feedly newsfeed

I'm pleased to announce two Chef vRA 7.0+ Blueprints from Chef, one for Windows and one for Linux. They are basic, initial integrations, and include an example that demonstrates how to bootstrap chef-client onto a cataloged virtual machine.

Please note that even though the Blueprint's name includes Ubuntu, the only requirements are wget and bash, so the same process works with CentOS, RHEL, Debian, Ubuntu, and other Linux distros. Likewise, the Windows Blueprint should work fine with any version of Windows running PowerShell 3 or later.
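The wget-and-bash bootstrap on Linux essentially boils down to fetching and running Chef's public omnitruck install script; a hedged sketch of that core step (your Blueprint supplies the real Chef server URL and validation key afterwards):

```shell
# Minimal chef-client install via wget and bash (sketch).
# omnitruck.chef.io/install.sh is Chef's public installer endpoint.
wget -qO /tmp/install.sh https://omnitruck.chef.io/install.sh
sudo bash /tmp/install.sh
```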

There are plans to add a first-boot.json chef-client run, but before we go any further, we are seeking immediate feedback. The ultimate goal is that, after requesting a machine from vRA, you receive a machine already verified and bootstrapped with Chef.

We invite you to join our official VMware{code} Slack channel, #chef. This is a great place for feedback on these types of integrations, as well as anything related to the knife plugins. Help grow our community! The more feedback we get, the better we can make VMware and Chef's integration for everyone!

The post VMware vRealize Automation 7.0+ and Chef appeared first on Chef Blog.

Why Habitat? – Plans and Packages, Part 2 [feedly]

Why Habitat? – Plans and Packages, Part 2

-- via my feedly newsfeed

TL;DR: Wow, a 2000 word blog post. Habitat is a better way to package apps, and creates better containers. Walk through the Habitat tutorial here.

In part 1 of this blog post series, we talked about how Habitat approaches the problem of Application Packaging. In this part, we'll show you the difference between creating a container using traditional methods and using Habitat.

Traditional Container Building

Let's look at a typical workflow for a developer leveraging Dockerfiles to package a Node.js application. The developer would typically start with a Docker Hub-provided image to launch Node. Docker Hub contains several thousand images you can use to get started packaging your application. Using one is as simple as running a short command.

michael@ricardo-2:plans_pkg_part_2$ docker run -it node
Unable to find image 'node:latest' locally
latest: Pulling from library/node
43c265008fae: Pull complete
af36d2c7a148: Pull complete
143e9d501644: Pull complete
f6a5aab6cd0c: Pull complete
1e2b64ecebce: Pull complete
328ff1526764: Pull complete
Digest: sha256:1b642fb211851e8515800efa8e6883b88cbf94fe1d99e674575cd24a96dcc940
Status: Downloaded newer image for node:latest
> i = 5
5
> x = 5
5
> x + i
10

If you've ever submitted a ticket to get a development machine to just do your job, this is a very delightful experience by comparison. With a few characters you have a machine with Node.js to run your application. The next thing you need to do is combine that Docker Hub provided image with your actual application artifacts. That's where you'll need to start understanding more about packaging with container formats.

Container formats provide a Domain Specific Language to describe how a container should be built. This is the venerable Dockerfile in the case of Docker, or an App Image Manifest for ACI. If we're using Docker we'll need to write a Dockerfile like the below to package the application.

FROM node:latest

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY source/package.json /usr/src/app/
RUN npm install

# Bundle config
RUN mkdir -p /usr/src/app/config
COPY source/config/config.json /usr/src/app/config

# Bundle app source
COPY source/server.js /usr/src/app/

EXPOSE 8080

CMD [ "npm", "start" ]

This is a basic example of how to package a Node.js application in a container. From a developer's perspective, this is a great experience. We still don't need to know much about how the underlying system works; we simply pull in the required version of the Node.js image (FROM node:latest) and copy our source code into the container.

If we build a container from this Dockerfile, we will get an image with some operating system, an installation of node, our dependencies from NPM, and our application code. Running docker images will show us the results of this image creation.

michael@ricardo-2:plans_pkg_part_2$ docker images
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
mfdii/node-example   latest    36c6568c606b   4 minutes ago   655.9 MB
node                 latest    04c0ca2a8dad   16 hours ago    654.6 MB

Great, we have an image, but it's almost 656 MB! If we compare this to the node image that we based our container on, we see that our application has taken only 1.3 MB of space. What's the additional 654 MB? If we take a look at the Dockerfile used to create node:latest, we can begin to piece together what's going on. The Dockerfile is simple: it sets up some keys and then pulls down and installs Node. It also builds from another Docker image, one based on Debian's Jessie release. If we dig into that Dockerfile we can quickly see where the 654 MB is coming from: a whole host of packages downloaded with the OS package manager. We could keep tracing back these Dockerfiles for some time; an easier way is to run docker history on an image, which quickly shows where the bulk of an image comes from.
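For instance (a sketch; this requires a local Docker daemon, and the exact layer list will vary with the image version you pulled):

```shell
# List the layers of the node base image and the size each contributes.
docker history node:latest
```

The largest layers will typically turn out to be the Debian base and the apt-installed build dependencies.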

If you're a traditional operations-focused engineer you might shrug your shoulders at this; of course you need an operating system to run your application. Others will guffaw at the fact that we've pulled in 654 MB of "who knows what". As a developer, this bulk of an operating system creates unnecessary friction when it's time to move to production: when you ship your application, you also ship 654 MB of stuff you didn't explicitly ask for and may not even need. Of course, your operations team might want a more complete understanding of what's in your container. Are you shipping vulnerable libraries or code? Who built the underlying container? Can you trust the source of that container?

Problems with the traditional way

Hark back to part 1 and remember that containers allow us to rethink how we package our applications. If you look at the different patterns and practices that have emerged around containers, how much of the underlying OS you require (or should ship) is a sliding scale based on the needs of the application. Some applications can be statically linked at compile time, producing container images only a few MB in size with no operating system at all. Other applications need more of the operating system, and thus you have the sliding scale below.

You can think of the left side of the above diagram as more traditional methods of application deployment using VMs, and the right side as more modern designs such as microservices. The goal as you move to the right is to reduce the footprint of the operating system as much as possible. This is important for a few reasons:

  • Reducing the operating system creates smaller images, which in turn increases the speed at which we can deploy containers from these images.
  • Reducing the operating system decreases the attack surface of the container and increases the ease at which the container can be audited for vulnerable software components.
  • Reducing the operating system footprint decreases the chance your application will consume a component of that OS, thus coupling you to that OS vendor's release cadence.

Let's look at Habitat's approach to building the same container.

Habitat's approach

To get started packaging your application into a container using Habitat, you first define a "plan" to package your application. Remember, Habitat takes a top-down approach, starting with the application's concerns rather than the bottom-up approach of starting with the operating system. Where we deploy our application is something we can worry about later.

The Habitat plan, or plan.sh, starts with some standard information that allows you to define the metadata of the application package you wish to create.

pkg_origin=myorigin
pkg_name=mytutorialapp
pkg_version=0.2.0
pkg_maintainer="The Habitat Maintainers <>"
pkg_license=()
pkg_upstream_url=
pkg_source=nosuchfile.tar.gz
pkg_deps=(core/node)
pkg_expose=(8080)

The metadata contains items such as our Habitat origin (or organization), the name of the package, version, source code location (if applicable), etc. It also allows you to declare the dependencies your application needs to run. Since we are packaging the same Node.js application we packaged in the Docker example, we need to declare a dependency on the Habitat node package.

This dependency statement is similar to what we did in the Docker example when we declared a dependency on the Docker-provided node image (the FROM node:latest line). However, instead of depending on an entire operating system (which is what the node:latest image gives us), we've declared a dependency on just the application runtime itself.

Once we've defined the metadata, we need to specify the lifecycle of our application's build process. Habitat gives you various methods to define the building and installation lifecycle of your application. Virtually all applications go through these lifecycle stages, and Habitat allows you to define what happens in each of them.

For our application we'll need to define the Build and Install stages. We can do this by defining those methods in our plan.sh:

do_build() {
  # copy the source code to where Habitat expects it
  cp -vr $PLAN_CONTEXT/../source/* $HAB_CACHE_SRC_PATH/$pkg_dirname

  # This installs the dependencies listed in packages.json
  npm install
}

do_install() {
  # copy our source to the final location
  cp package.json ${pkg_prefix}
  cp server.js ${pkg_prefix}

  # Copy over the nconf module to the package that we installed in do_build().
  mkdir -p ${pkg_prefix}/node_modules/
  cp -vr node_modules/* ${pkg_prefix}/node_modules/
}

To complete our plan.sh, we'll need to override a few other Habitat callbacks. You can see the complete plan file in the GitHub repo for this example.

These functions are essentially the same steps we defined in our Dockerfile. The important thing to note is that I've made zero assumptions about where this application is going to run. Our Dockerfile example immediately assumes that 1) you're running this application in a container (obviously), and 2) you're running on a particular operating system (node:latest is based on Debian, remember). We've made no such assumptions in our Habitat plan. What we have defined are the build lifecycle steps of our application, with the dependencies our application needs explicitly called out. Remember, we want to start with the application, not the operating system, and we've done just that.

Creating our Habitat Docker Container

Now that we've defined what the application needs to run, and how to build it, we can have Habitat build an application artifact we can then deploy in various ways. This is a simple process of entering the Habitat studio and issuing the build command. The build will create an application artifact (a Habitat Artifact or .hart file).

When we're ready to run our application inside a container, we can export this artifact and its required dependencies in a container format. The hab pkg export command gives us several options to export our application artifact. Coming back to the Dockerfile example, we can run hab pkg export docker myorigin/mytutorialapp to export our application as a container. This will query our application artifact to calculate its dependencies, including transitive dependencies, and package those dependencies into our Docker container. The export will also include a lightweight OS, and the Habitat supervisor (which we will discuss more in Part 3).
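End to end, the studio-and-export loop looks something like the following sketch (assuming your plan lives in the current directory and you've set up a "myorigin" origin key; the origin and package names are the ones from this example):

```shell
# Inside the project directory containing your plan:
hab studio enter        # drops you into a clean, chrooted build studio
build                   # run inside the studio; produces results/*.hart
hab pkg export docker myorigin/mytutorialapp   # also run inside the studio
exit
# Back on the host, run the freshly exported image:
docker run -it myorigin/mytutorialapp
```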

If we look at the container we export with Habitat, you'll see we've created a much slimmer container.

michael@ricardo-2:plans_pkg_part_2$ docker images
REPOSITORY            TAG      IMAGE ID       CREATED          SIZE
mfdii/node-example    latest   36c6568c606b   40 minutes ago   655.9 MB
node                  latest   04c0ca2a8dad   16 hours ago     654.6 MB
mfdii/mytutorialapp   latest   534afd80d74d   2 minutes ago    182.1 MB

Why this matters

All in all, we end up with a container that contains only the artifacts required to run our application. We don't have a container with a mysterious 650 MB that comes from some unknown place. This is because we started with the application first, defined the build lifecycle in our plan.sh, and explicitly declared our dependencies. Habitat can use this information to build the minimum viable container we need to run our application. We end up with a container that's slimmer (182 MB vs. 655 MB) and contains only the required concerns.

This idea of a minimum viable container becomes important when you think about auditing containers to verify you're not running a library or binary with known vulnerabilities. The surface area of this container is smaller and thus easier to audit, and the container build system (Habitat in this case) has explicit knowledge of what we included inside this container.

The Habitat build system also makes for a much more portable application. Because we haven't tied our application definition to a container format, as we did with our Dockerfile, we can export our application to a variety of formats (Docker, ACI, Mesos, or a tar.gz). We could extend this further by offering more export formats, such as VM images.

What's Coming Up

In the next blog post in our series, we'll talk more about running the actual application artifact Habitat creates, and the features of Habitat that let you inject configuration into your application at run time. In the meantime, try the Habitat tutorial. It walks you through packaging up the example Node.js application we've talked about.

You might also be interested in our webinar, "Simplifying Container Management with Habitat." Watch to learn how Habitat makes building, deploying, and running your applications in containers simple, no matter how complex the production environment.

The post Why Habitat? – Plans and Packages, Part 2 appeared first on Chef Blog.

Thursday, December 1, 2016

Mozilla and Tor Warn of Critical Firefox Vulnerability, Urge Users to Update [feedly]

Mozilla and Tor Warn of Critical Firefox Vulnerability, Urge Users to Update

-- via my feedly newsfeed 

Mozilla and Tor have published browser updates to patch a critical Firefox vulnerability used to deanonymize users (via ArsTechnica).

Privacy tool Tor is based on the open-source Firefox browser developed by Mozilla, which received a copy of the previously unknown JavaScript-based attack code yesterday. Mozilla said in a blog post that the vulnerability had been fixed in a just-released version of Firefox for mainstream users.

The code execution flaw was reportedly already being exploited in the wild on Windows systems, but in an advisory published later on Wednesday, Tor officials warned that Mac users were vulnerable to the same hack.
"Even though there is currently, to the best of our knowledge, no similar exploit for OS X or Linux users available, the underlying bug affects those platforms as well. Thus we strongly recommend that all users apply the update to their Tor Browser immediately."
The exploit is capable of sending the user's IP and MAC address to an attacker-controlled server, and resembles "network investigative techniques" previously used by law-enforcement agencies to unmask Tor users, leading some in the developer community to speculate that the new exploit was developed by the FBI or another government agency and was somehow leaked. Mozilla security official Daniel Veditz stopped short of pointing the finger at the authorities, but underlined the perceived risks involved in attempts to sabotage online privacy.
"If this exploit was in fact developed and deployed by a government agency, the fact that it has been published and can now be used by anyone to attack Firefox users is a clear demonstration of how supposedly limited government hacking can become a threat to the broader Web."
The Firefox attack code first circulated on Tuesday on a Tor discussion list and was quickly confirmed as a zero-day exploit – the term given to vulnerabilities that are actively used in the wild before the developer has a patch in place.

Discuss this article in our forums

SHA-1 Certificates in Chrome [feedly]

SHA-1 Certificates in Chrome

-- via my feedly newsfeed

Posted by Andrew Whalley, Chrome Security
We've previously made several announcements about Google Chrome's deprecation plans for SHA-1 certificates. This post provides an update on the final removal of support.

The SHA-1 cryptographic hash algorithm first showed signs of weakness over eleven years ago and recent research points to the imminent possibility of attacks that could directly impact the integrity of the Web PKI. To protect users from such attacks, Chrome will stop trusting certificates that use the SHA-1 algorithm, and visiting a site using such a certificate will result in an interstitial warning.
Release schedule
We are planning to remove support for SHA-1 certificates in Chrome 56, which will be released to the stable channel around the end of January 2017. The removal will follow the Chrome release process, moving from Dev to Beta to Stable; there won't be a date-based change in behaviour.

Website operators are urged to check for the use of SHA-1 certificates and immediately contact their CA for a SHA-256 based replacement if any are found.
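One quick way to check is with openssl. The sketch below generates a throwaway SHA-256-signed certificate and prints its signature algorithm; point the same x509 inspection at your own certificate files (or an s_client connection) to audit a live site:

```shell
# Create a self-signed test certificate and inspect its signature algorithm.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
  -keyout /tmp/test-key.pem -out /tmp/test-cert.pem \
  -days 1 -subj "/CN=sha1-check-demo" 2>/dev/null
# A SHA-1 certificate would report sha1WithRSAEncryption here instead.
openssl x509 -in /tmp/test-cert.pem -noout -text | grep -m1 'Signature Algorithm'
```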
SHA-1 use in private PKIs
Previous posts made a distinction between certificates which chain to a public CA and those which chain to a locally installed trust anchor, such as those of a private PKI within an enterprise. We recognise there might be rare cases where an enterprise wishes to make their own risk management decision to continue using SHA-1 certificates.

Starting with Chrome 54 we provide the EnableSha1ForLocalAnchors policy that allows certificates which chain to a locally installed trust anchor to be used after support has otherwise been removed from Chrome. Features which require a secure origin, such as geolocation, will continue to work, however pages will be displayed as "neutral, lacking security". Without this policy set, SHA-1 certificates that chain to locally installed roots will not be trusted starting with Chrome 57, which will be released to the stable channel in March 2017. Note that even without the policy set, SHA-1 client certificates will still be presented to websites requesting client authentication.
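On Linux, for example, the policy can be set with a JSON file in Chrome's managed-policy directory; a sketch (the path shown is the standard one for Google Chrome and requires root to write, while Windows and macOS use the equivalent registry key or managed preferences):

```shell
# Sketch: enable EnableSha1ForLocalAnchors for Google Chrome on Linux.
mkdir -p /etc/opt/chrome/policies/managed
cat > /etc/opt/chrome/policies/managed/sha1_local_anchors.json <<'EOF'
{
  "EnableSha1ForLocalAnchors": true
}
EOF
```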

Since this policy is intended only to allow additional time to complete the migration away from SHA-1, it will eventually be removed in the first Chrome release after January 1st 2019.

As Chrome makes use of certificate validation libraries provided by the host OS when possible, this option will have no effect if the underlying cryptographic library disables support for SHA-1 certificates; at that point, they will be unconditionally blocked. We may also remove support before 2019 if there is a serious cryptographic break of SHA-1. Enterprises are encouraged to make every effort to stop using SHA-1 certificates as soon as possible and to consult with their security team before enabling the policy.

Wednesday, November 23, 2016

Xen Orchestra 5.4 [feedly]

Xen Orchestra 5.4

-- via my feedly newsfeed

Here is a new release of Xen Orchestra, the management stack for XenServer.

In this release: better UI, new views, Slack plugin for backup reports… and many more!

As usual, the "raw" changelog is available here.

Don't forget to download our brand new appliance!

Backup reports to Slack

Now your backup reports can be sent directly to Slack!

It also works with Mattermost:

You can read more about XenServer backup reports to Slack here.


This version also brings a lot of small improvements to ease your everyday XenServer administration work.

Dedicated SR view

A new home view is available, dedicated to Storage Repositories (SRs). It's also available with the keyboard shortcut "g s":

SRs are sorted by size by default, but you can change the order easily:

You can also expand lines to see the number of VDIs in the SR:

Obviously, you can select multiple SRs to make bulk actions. And finally, the SR status has 3 colors:

  • green: SR connected to all its hosts
  • orange: SR partially connected (one host at least but not all)
  • red: SR not connected to any hosts

This way, you can detect quickly if you have unexpected disconnected SRs!

Clickable tags

When you click on a tag, it automatically takes you to the home view filtered on that tag. Very useful for "grouping" your VMs or any other objects.


The restore backup UI is now more unified and easier to use: we display all VMs at once, regardless of where they are stored. You can search and filter for any VM you want to restore.

Also, restored VMs are suffixed with the date of the backup.

E.g., you made a backup of a VM named Alpine Mini on November 7th; the restored VM will be named Alpine Mini (2016-11-07).

Test plugins

Some plugins require configuration that could be hard to test easily. For example, sending emails for backup reports.

So we added a "Test plugin" button after an email field to send the test:

You'll receive an email:

Hi there,

The transport-email plugin for Xen Orchestra server seems to be working fine, nicely done :)

In case of any error, it will be displayed.

Boolean filters

Want to search for shared Storage? You can use this syntax now: shared?. Same for auto_poweron? to check which VM will boot automatically if XenServer is rebooted.

Under the hood

We started to create a complete backend to report all XenServer errors precisely. It's the first step toward a better understanding of what's happening in certain cases.

What's next?

In December, the final release of the year will deliver a very exciting new feature. Stay in touch ;)

NTP DoS Exploit Released — Update Your Servers to Patch 10 Flaws [feedly]

NTP DoS Exploit Released — Update Your Servers to Patch 10 Flaws

-- via my feedly newsfeed

A proof-of-concept (PoC) exploit for a critical vulnerability in the Network Time Protocol daemon (ntpd) has been publically released that could allow anyone to crash a server with just a single maliciously crafted packet. The vulnerability has been patched by the Network Time Foundation with the release of NTP 4.2.8p9, which includes a total of 40 security patches, bug fixes, and
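To check whether a given server needs the update, you can first query the running daemon's version (a sketch; it assumes the ntp utilities are installed and a local ntpd is reachable):

```shell
# Ask the local ntpd which version it is running; anything older than
# 4.2.8p9 should be updated.
ntpq -c "rv 0 version"
```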

Tuesday, November 22, 2016

Reporting False Positives with Snort.org [feedly]

Reporting False Positives with Snort.org

-- via my feedly newsfeed

Some users may not be aware, but you've been able to report false positives on Snort.org for years. I say that users may not be aware because, quite unintentionally, the feature wasn't very easy to find.

With today's rollout of version 5.1.1 of Snort.org, hopefully we've fixed that.

When visiting Snort.org, log in, then click on your email address; you will be taken to your User Preferences and information screen.

On the left side of the screen, you will see the different sections in your user account:

Including a new link at the bottom of the list for "False Positive".

The screen looks like this:

When you fill out this form and click submit, the pcap and description will enter directly into our analysts' queue for work, allowing us to process false positives quickly.

In a future version of the Snort site, we are going to tie this feature directly into what we call the "Analyst Console" here at Talos, allowing you to automatically see the status of your false positive as it flows through our system: when the rule will be fixed, and when the fix was released.

In the meantime, please use this system for your FP reports and help us improve the feature!

Introducing the Docker Community Directory and Docker Community Slack [feedly]

Introducing the Docker Community Directory and Docker Community Slack

-- via my feedly newsfeed

Introducing the Docker Community Directory and Docker Community Slack

Today, we're thrilled to officially introduce the Docker Community Directory and Slack to further enable community building and collaboration. Our goal is to give everyone the opportunity to become a more informed and engaged member of the community by creating subgroups and channels based on location, language, use cases, and interest in specific Docker-centric projects or initiatives.


Sign up for the Docker Community Directory and Slack


Docker Community Directory

Members who join the Docker Community Directory will benefit from the following:

  • Latest product updates and release notes
  • Targeted invitations and promo codes for Docker community events (DockerCon, Docker Summits, Meetups, Docker Partner events, trainings, workshops and hackathons)
  • Ability to participate in raffles for Docker Swag
  • Chance to get priority access to product betas
  • Opportunity to get involved as a user and/or customer reference, meetup organizer, mentor, speaker, etc.
  • Be listed on the Docker Community Directory without sharing your email (built in direct messaging system)
  • Access to the Docker Community Slack

The Docker Community Directory is a tool for community members to collaborate. Everyone should use it respectfully, with genuine and specific Docker-centric messages. It should not be used to send messages that could be qualified as spam or otherwise violate Docker's community code of conduct. We invite community members who think the directory is being misused by another member to reach out to us with more information, so that we can address the situation accordingly. Members not abiding by the community code of conduct and misusing the platform will be subject to a warning and potential removal from the directory.

Docker Community Slack

We launched the Docker Community Slack a few months ago, so that the Docker Team could easily collaborate with two key groups of amazing contributors: Docker Captains and Docker Meetup Organizers. After a few weeks of planning and figuring out the best way to proceed, we are now ready and excited to extend the invitation to the broader community!

Due to a high level of engagement, our Slack team reaches the 10,000-message archive limit very quickly, so you might not be able to see older Slack messages. Everyone interested in seeing or searching older messages can look at our Community Slack archive here:

How can I be invited to join?

Docker has set up a registration form. Provide the required information and a Slack invite will be sent to you. You can also contact us if you have any questions or concerns.

What's the Docker Community Slack URL?

Once you've received the invitation and joined our Slack team, you can go there to log in.

Who are the admins of the Docker Community Slack?

Karen Bajza (@kbajza), Lisa McNicol (@lisa.mcnicol), Sophia Parafina (@spara), Mano Marks (@mano), Jenny Burcio (@jenny), Sebastiaan van Stijn (@thajeztah) and Victor Coisne (@vcoisne) are the current administrators of the Docker Community Slack team and private channels. They are also in charge of monitoring compliance with the code of conduct. If you witness inappropriate behaviour, please reach out to one of us so that we can respond appropriately.

Who should I contact if I want to create a new public or private channel?

The Docker Community Team is the central authority for this Slack team. If you're interested in creating a new public or private channel for your topic of interest, language or location, please reach out to one of the administrators listed above.

What happens to the IRC and Gitter channels?

The IRC and Gitter channels will be closed down. We will post an announcement on those channels before closing, so that users have time to register for Slack. We have enabled gateway access for users that prefer using XMPP or IRC client. Instructions for connecting an XMPP or IRC client can be found on the Slack website.

The post Introducing the Docker Community Directory and Docker Community Slack appeared first on Docker Blog.

This $5 Device Can Hack your Password-Protected Computers in Just One Minute [feedly]

This $5 Device Can Hack your Password-Protected Computers in Just One Minute

-- via my feedly newsfeed

You need to be more careful next time you leave your computer unattended at the office, as it costs hackers just $5 and only 30 seconds to hack into any computer. Well-known hardware hacker Samy Kamkar has once again devised a cheap exploit tool, this time one that takes just 30 seconds to install a privacy-invading backdoor into your computer, even if it is locked with a strong password.

Oracle acquires DNS provider Dyn for more than $600 Million [feedly]

Oracle acquires DNS provider Dyn for more than $600 Million

-- via my feedly newsfeed

Yes, Oracle just bought the DNS provider company that brought down the Internet last month. Business software vendor Oracle announced on Monday that it is buying cloud-based Internet performance and Domain Name System (DNS) provider Dyn. Dyn is the same company that was hit by a massive distributed denial of service (DDoS) attack by the Mirai botnet last month which knocked the entire

Spammers using Facebook Messenger to Spread Locky Ransomware [feedly]

Spammers using Facebook Messenger to Spread Locky Ransomware

-- via my feedly newsfeed 

If you come across any Facebook message with an image file (specifically the .SVG file format) sent by one of your Facebook friends, just avoid clicking it. An ongoing Facebook spam campaign is spreading a malware downloader among Facebook users by taking advantage of an innocent-looking SVG image file to infect computers. If clicked, the file would eventually infect your PC with the nasty Locky  

Monday, November 21, 2016

Introducing DevOps in Nashville at DevOpsDays BNA [feedly]

Introducing DevOps in Nashville at DevOpsDays BNA

-- via my feedly newsfeed

DevOps in Nashville

The Chef team was recently in Nashville as a silver sponsor at the first-ever DevOpsDays BNA (those who love airports will recognize the local 3 letter code). The event was held on November 10th and 11th, right downtown next to the honky tonks and Johnny Cash Museum. Most attendees were local to Tennessee, although some people did smartly use the conference as an excuse for a mini vacation.

It's a great time for a DevOps community to form in this region. I have been living in Nashville for about 3 1/2 years. Although I'm pretty new to Chef, my resume has had DevOps-flagged keywords for much longer. Anecdotally, I have seen a huge uptick this year in the number of LinkedIn recruiters searching for DevOps practitioners in Nashville. More often than not, they're also asking for Chef skills specifically. Healthcare startups, especially, are exploding in the area, adding to an enterprise landscape that includes stalwarts such as HCA. Didn't know that Nashville is a huge hub for healthcare companies? Now you do.

As an inaugural event, the presentations were geared towards people who are new to the concept of DevOps. What is it, why do I care, why do I want to make anyone else at my job care? Different ways of introducing DevOps as a cultural shift were a recurring theme. Hopefully, participants walked away from the event better understanding that when rolling out new processes and technology for high velocity projects, a cultural shift is just as important as new software.

The DevOps Ideal

Inclusion was a term I heard repeated frequently throughout the two days of presentations. It was more prevalent than collaboration, a term that now feels like a buzzword in our industry. Many of us are good at collaborating within our own teams. However, if we aren't including people from outside our own area of specialty in our day-to-day activities, we aren't fully practicing DevOps.

One of my favorite ideas from the event was this: if you are sitting in a meeting and everyone is only from your team or your discipline, if everyone looks like you or is your age, or has been on the same project teams with you forever – then you may not be embracing the DevOps ideal of an un-siloed and inclusive communication culture.

This idea is not about formal meetings where you are forced to invite someone from project management so you can say you tried. To reiterate, having a weekly "DevOps Meeting" with a rep from each department is not DevOps. DevOps as a culture is more passive. It is a natural result of engaging others and building trust. It is hard to know who you can depend on until you give them the chance to come through.

Think about those whom you know best at work. You begin to anticipate what they are going to say and think–and that's not all bad. But if you can guess what they are going to say, then it is unlikely you will be exposed to a different view on a topic. Including people with a different approach or set of responsibilities can lead to someone identifying a problem or proposing an idea that will make the project better when the release gets pushed. Post-mortems are great; early catches are better.

HugOps and Trust

To me, the inclusion theme ties right back into the HugOps video that you may remember from ChefConf 2016.

Do you want to hug someone you don't know? Don't talk to? Don't like? Don't trust? Chef is approaching DevOps as an opportunity to meet and understand the people you will eventually want to hug. Working together toward a goal builds trust. You need to include new people in your daily work so they can become someone huggable. And not just physical hugs; verbal ones too. The conversational hugs (i.e., respect and trust) in a work environment may be the most important for making progress on highly debated topics.

You may know that Chef is awesome technology. It will absolutely give you the ability to control an auditable and immutable infrastructure, making the compliance team comfortable with ten automated production pushes a day. It's another thing to be able to communicate that in a way that people in your organization will be willing to listen to.

Inclusive Conversations

The Chef Community Guidelines are a great place to start for direction on how to initiate inclusive conversations. This kind of attitude will lead you closer to a culture of DevOps.

  • Be welcoming, inclusive, friendly, and patient.
  • Be considerate.
  • Be respectful.
  • Be professional.
  • Be careful in the words that you choose.
  • When we disagree, let's all work together to understand why.

DevOps Solutions from Chef

  • Learn about our DevOps workshops to assess your DevOps readiness, accelerate the integration of DevOps, and more
  • Download the 'Automation for DevOps' white paper, which focuses on the technical attributes of automation and the DevOps workflow, and how they help you meet the demands of the digital economy
  • Read the 'Foundation of DevOps' article in our Skills Library

The post Introducing DevOps in Nashville at DevOpsDays BNA appeared first on Chef Blog.

Friday, November 18, 2016

ShapeBlue contributes native support for Kubernetes and Docker to Apache CloudStack [feedly]

ShapeBlue contributes native support for Kubernetes and Docker to Apache CloudStack

-- via my feedly newsfeed

Offers seamless Container-as-a-Service without disruption to user experience or business process

Sevilla, Spain — 17 November 2016 — ShapeBlue, the largest independent integrator of CloudStack technologies worldwide, today announced at ApacheCon Europe that it will be donating its CloudStack Container Service software to the Apache CloudStack project. The technology integrates CloudStack with Kubernetes and Docker to provide a seamless Container-as-a-Service (CaaS) offering within existing Infrastructure-as-a-Service (IaaS) environments with no disruption to user experience or business process.

"We are really excited to be handing over the code and IP of CloudStack Container Service to the CloudStack project as part of our ongoing commitment to open source," said Giles Sirett, CEO of ShapeBlue. "The CloudStack project is the best environment for others to build on the work we've done to date.

CloudStack Container Service is a plug-in for Apache CloudStack that enables users to create container clusters within an existing multi-tenant environment, provided by CloudStack. The user experience is seamless: users can both manage container clusters and deploy/manage cloud native applications in the same user-interface that they use to manage their existing compute, network, and storage. Service providers running dedicated or custom UIs benefit from a number of simple API calls that have been added to the CloudStack API to allow simple integration.

"We have focused on creating a seamless experience between CloudStack orchestrated infrastructure and Kubernetes orchestrated container environments to meet demand from our customers," explained Sirett.

The project began as a collaboration between ShapeBlue and Skippbox, providers of platforms and tools that ease the deployment and lifecycle management of cloud-native applications. It has been available for download since May 2016 under the Apache 2.0 license and, ultimately, will be moved under the governance of the Apache CloudStack project.

"The Kubernetes CloudStack plug-in has been used by a number of cloud service providers for some time, and we are now confident about its potential to be utilised for a number of other use-cases by the open source community," said Skippbox founder and CEO, Sebastien Goasguen. "Open sourcing it is the right thing to do, to help the community transition to a container world."

The software gives end-users the ability to use multiple container engines such as Docker or rkt from CoreOS, hosted container registries like Docker Hub, Quay or Google Container Registry (GCR), as well as their own private registries. It provides this whilst overcoming the biggest challenge for existing IaaS providers: how to quickly offer their users a robust CaaS offering, but with a seamless user experience and no disruption of their existing IaaS business processes and commercial models.

Ian Rae, CEO of CloudOps, said, "ShapeBlue's contribution allows service providers to offer 'Containers-as-a-Service' for their customers (similar to AWS ECS) based on Apache CloudStack. Their customers can now provision and manage containers on top of their cloud resources. CloudOps works with many open source cloud computing projects and we believe this contribution represents an important advancement in the capabilities of Apache CloudStack."

"The underlying framework that we have created can be easily used as a basis for integrating Docker swarm, Apache Mesos, Apache Hadoop, or any other cluster orientated platform," explained Sirett. "Adoption can be greatly accelerated by making this part of CloudStack itself, where the community can collaborate on further development."

"Supporting containers is a great step forward for our users given the current cloud computing landscape," said Will Stevens, Vice President of Apache CloudStack. "We appreciate ShapeBlue contributing this integration to the CloudStack community."

Further information on CloudStack Container Service is available at

About ShapeBlue

ShapeBlue are the largest independent integrator of CloudStack technologies globally and are specialists in the design and implementation of IaaS cloud infrastructures for both private and public cloud implementations. Services include IaaS cloud design, software engineering, CloudStack consulting, and training. The company has a global customer base with offices in London (UK), Mountain View (CA), Bangalore (India), Rio de Janeiro (Brazil), and Cape Town (South Africa). For more information, visit

Writing Elegant Tests [feedly]

Writing Elegant Tests

-- via my feedly newsfeed

You've probably found that the many tests you write for all your cookbooks require as much or more effort than maintaining the cookbooks themselves. You've also probably noticed that there's quite a bit of boilerplate code required to verify all the recipes, resources, and helpers. The consequence is that much of your test code is duplicated from one cookbook to another. In this webinar, Franklin Webber, Training and Technical Content Lead at Chef, will show you techniques that bring elegance to a cookbook's tests. You'll learn how to eliminate redundancy, rebuild common patterns into helpers, and extract those helpers into a portable library.

Join to learn how to:

  • Refactor tests for more elegant code
  • Craft reusable testing resources and helpers
  • Extract testing resources into a Ruby gem

Who should attend:

  • Anyone who writes tests for cookbooks

Chef on Chef: Extreme Dogfooding [feedly]

Chef on Chef: Extreme Dogfooding

-- via my feedly newsfeed

It's hard enough for a company to run its own services reliably; there are entire volumes of work written on how to do it. When you're a vendor to other groups in your own company, you add even more complexity when you dogfood software: the practice of testing and using the same software you ship to customers. At Chef, we decided to dogfood our own products along with several new technology components simultaneously. It was an educational experience! In this webinar, Chef Principal Engineer Seth Chisamore shares how his team learned to dogfood software for the Package Router project, a service used to distribute every product Chef Software Inc. ships. The software was still under development, and the team used dogfooding not just as quality control but as a way to demonstrate their confidence in its stability. He'll talk about lessons learned and how the team leveraged several Chef products, including Habitat, InSpec, and Chef Automate, in tandem with a new technology stack to make sure our software is always ready to release to customers.

Join us to learn:

  • The value of including automated tests as part of a build pipeline
  • How we use our own open-source and commercial products at Chef to serve up our software to the world
  • How to leverage new technologies safely and at velocity

Who should attend:

  • Release engineers
  • DevOps engineers
  • Systems architects

Simplifying Container Management with Habitat [feedly]

Simplifying Container Management with Habitat

-- via my feedly newsfeed

Containers provide a delightful development experience. It's easy to download a container image and get started writing code. But it's a different story when you have to run containers in production at scale. That's when all the hidden complexities become apparent and the real challenges begin. What tools are you going to use to build, deploy, run, and manage your containerized applications? How are you going to manage differences between environments like staging and production with a fleet of immutable objects? How will you effectively scale containerized applications once you've deployed them? Habitat, our open-source project for application automation, simplifies container management by packaging applications in a compact, atomic, and easily auditable format that makes it easier to deploy your application on various container runtimes. Once your applications are deployed, the Habitat supervisor simplifies the complexities of running in production environments with built-in abstractions for functions typically handled by external tooling, such as dynamic scaling and rolling updates. Join Ian Henry and Michael Ducy on Friday, December 9th at 10:00 AM PT, to learn how Habitat makes building, deploying, and running your applications in containers simple, no matter how complex the production environment.

Join us to learn:

  • Why automation is critical to deploying in a containerized world, and how Habitat provides the minimum viable automation
  • Why a strong container build system is important, and how the Habitat studio provides that system
  • Why Habitat is the easiest way to build and run containers at scale, no matter the underlying container architecture

Who should attend:

  • Anyone new to containers
  • Anyone challenged by running containers in production at scale

Why Habitat? – Plans and Packages, Part 1 [feedly]

Why Habitat? – Plans and Packages, Part 1

-- via my feedly newsfeed

Habitat is Chef's solution for application packaging and delivery: automation that travels with the application. This is the first in a multi-part series on the concepts behind Habitat.

Application automation is a big topic that spans multiple services, from packaging to service discovery, runtime supervision, and deployment topologies. As a starting point, it is important to understand what has changed in application packaging and delivery, and how Habitat helps automate apps and, in doing so, improves the velocity of app delivery.

Why is Application Packaging Important Now?

For many years, there has been a shift to decouple applications from the infrastructure they run on. Businesses have moved over time from physical environments, to virtual environments, and most recently to containerized environments with the growth of Docker, CoreOS, Kubernetes, and others. This shift highlights the traditional separation of concerns between apps and OS, and lets us question whether those concerns are still valid in an automated world where compute runtimes are treated as immutable artifacts. In other words: how can we decouple the application from the underlying operating system on which it runs?

Specifically, decoupling the App from the OS enables particular advantages:

  • We can define an app 'package' that completely contains (or has references to) all the dependencies for a given app, and confidently, repeatedly deploy that app at scale with consistent results.
  • We can avoid the hell of managing multiple apps (and competing dependencies) on one OS.
  • We can eliminate large parts of the OS when application deployment and management are automated.

A simple and useful illustration of how containers focus on application packaging to achieve decoupling can be found in the Kubernetes documentation:

Similarly, Habitat solves this problem by creating reusable packages for libraries that need to be consumed by applications. These libraries can be mixed and matched depending on the needs of the given application. Of course, the operating system vendors have been shipping reusable packages of libraries for some time; however, the current method is broken from a couple of perspectives.

  • OS provided libraries have a release cadence based on the vendor's release cycle. If you require a newer version of a library, you'll need to compile and package the library yourself.
  • OS provided libraries often have tightly coupled dependencies on other parts of the operating system.
  • OS provided packages do not have a declarative interface for configuring the package at deploy or run time.

These problems have been around for some time and are starting to be addressed by the OS vendors through things such as Ubuntu Snap packages or Software Collections in the Red Hat world. These solutions are steps in the right direction, but by committing to one of these technologies, you're still declaring an explicit dependency on a specific operating system.

How Habitat Approaches Application Packaging

We've practiced this philosophy of decoupling applications from operating systems at Chef for a number of years. It is the underlying idea behind Chef's Omnibus Application Packaging technology. Omnibus, and now Habitat, can free you from having to move at the velocity of the operating system developers. Given that some operating system vendors prefer stability over bringing in newer libraries faster, coupling to the OS becomes foolhardy.

The above diagram also illustrates another point: when the application is responsible for its dependencies, the underlying operating system can be minimized greatly. You see this in the design of newer OSes such as CoreOS, yet time and time again we still use distributions built for the old way of running applications when running them inside containers.

Habitat addresses this in a few ways:

  • Habitat packages contain only the concerns of the given application and the libraries it requires.
  • Habitat packages provide an area isolated from the OS to 1) store their files and libraries, and 2) run the service.
  • Habitat plans isolate build dependencies from runtime dependencies. This ensures that the requirements for building your application or libraries are not packaged with the code required to run your application.
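To illustrate that last point, a minimal plan might declare the two dependency sets separately. This is an illustrative sketch only; the origin, package names, and build steps below are hypothetical, not an official Habitat example:

```shell
# plan.sh -- illustrative sketch of a Habitat plan
pkg_origin=myorigin                  # hypothetical origin
pkg_name=myapp                       # hypothetical application
pkg_version="0.1.0"
pkg_deps=(core/glibc core/zlib)      # runtime dependencies: shipped with the package
pkg_build_deps=(core/gcc core/make)  # build dependencies: used in the studio, not shipped

do_build() {
  make
}

do_install() {
  cp -r dist/* "${pkg_prefix}/"
}
```

Because build dependencies never appear in `pkg_deps`, compilers and build tools stay out of the artifact that gets deployed.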

New Tech, Old Approaches

As we move towards a more application-centric view of the libraries needed to run an application, what is required is a solution that helps you easily package application artifacts and the dependencies they require. If you take a look at the current container ecosystem around the official container images, you find that virtually all of them tightly couple the application runtime to the operating system by doing what we've all done for years: running apt-get install.

FROM debian:jessie

# add our user and group first to make sure their IDs get assigned consistently,
# regardless of whatever dependencies get added
RUN groupadd -r www-data && useradd -r --create-home -g www-data www-data

ENV HTTPD_PREFIX /usr/local/apache2
ENV PATH $HTTPD_PREFIX/bin:$PATH

RUN mkdir -p "$HTTPD_PREFIX" \
    && chown www-data:www-data "$HTTPD_PREFIX"

WORKDIR $HTTPD_PREFIX

# install httpd runtime dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        libapr1 \
        libaprutil1 \
        libaprutil1-ldap \
        libapr1-dev \
        libaprutil1-dev \
        libpcre++0 \
        libssl1.0.0 \
    && rm -r /var/lib/apt/lists/*

What we need is a completely new way of thinking about how we build, package, and run our applications given the abilities containers give us. No longer do we need a bloated system filled with libraries intended for machines that users interact with directly. Rather, we need a system that provides only the bare bones an application needs to operate.

Another way to think about the problem is that Habitat provides a complete build system for your containers. Once you've created a package of your application, you can use the functionality provided in hab pkg export to easily export Habitat packages to a variety of different runtime environments (Docker, ACI, or Mesos). This export provides the basic skeleton OS required to run your application, along with your declared dependencies. Also embedded in the container is the Habitat supervisor, which runs your application and provides the automation required to configure it.
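Exporting a previously built package might look like this (the origin/package name is hypothetical, and which exporters are available depends on your Habitat version):

```shell
# Export a Habitat package to different runtime formats
hab pkg export docker myorigin/myapp   # produce a Docker image
hab pkg export aci myorigin/myapp      # produce an App Container (ACI) image
hab pkg export mesos myorigin/myapp    # produce a Mesos-ready bundle
```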

See this in Action

The problems Habitat is solving are varied and spread across many different concerns in your application and container lifecycle. Plans and packages are a core component of beginning to effectively build, deploy, and manage your application. In our next blog post, we'll give you a walkthrough of how to package a Node.js application with Habitat, and compare this to how you would package the same application with something such as a Dockerfile.

Get started with Habitat

This introductory tutorial for Habitat shows you how to set up your environment, create a plan, build an artifact, configure it, and re-configure it during start-up.

The post Why Habitat? – Plans and Packages, Part 1 appeared first on Chef Blog.