Thursday, May 24, 2018

Automatically Generating InSpec Controls from Terraform

// Chef Blog

InSpec-Iggy, or "Iggy" for short, is a new plugin for InSpec that generates InSpec compliance profiles from Terraform .tfstate files (and eventually AWS CloudFormation and Azure Resource Manager templates). Iggy was originally inspired by Christoph Hartmann's inspec-verify-provision repository and the associated blog post on testing Terraform with InSpec. With the release of InSpec 2.0 and the addition of AWS and Azure support, automatically generating controls became much more feasible. Let's see a quick demo of how it works:

inspec terraform generate

This currently generates a set of InSpec controls based on mapping Terraform resources to InSpec resources. The output may be captured as a file (e.g. "test.rb") and used from the command line with InSpec. The demo uses the Terraform Basic Two-Tier AWS Architecture and the following commands:

terraform apply
inspec terraform generate > test.rb
inspec exec test.rb -t aws://us-west-1

With the current versions of InSpec-Iggy (0.2.0) and InSpec (2.1.83) we get the following output:

$ inspec exec test.rb -t aws://us-west-1

Profile: tests from test.rb (tests from test.rb)
Version: (not specified)
Target:  aws://us-west-1

  ✔  aws_ec2_instance::i-0ed224373e440f72b: Iggy terraform.tfstate aws_ec2_instance::i-0ed224373e440f72b
     ✔  EC2 Instance i-0ed224373e440f72b should exist
     ✔  EC2 Instance i-0ed224373e440f72b id should cmp == "i-0ed224373e440f72b"
     ✔  EC2 Instance i-0ed224373e440f72b instance_type should cmp == "t2.micro"
     ✔  EC2 Instance i-0ed224373e440f72b key_name should cmp == "mattray-tf"
     ✔  EC2 Instance i-0ed224373e440f72b subnet_id should cmp == "subnet-fbc7f29c"
  ✔  aws_security_group::sg-7770ba0f: Iggy terraform.tfstate aws_security_group::sg-7770ba0f
     ✔  EC2 Security Group sg-7770ba0f should exist
     ✔  EC2 Security Group sg-7770ba0f description should cmp == "Used in the terraform"
     ✔  EC2 Security Group sg-7770ba0f vpc_id should cmp == "vpc-0eacdb69"
  ✔  aws_security_group::sg-0a70ba72: Iggy terraform.tfstate aws_security_group::sg-0a70ba72
     ✔  EC2 Security Group sg-0a70ba72 should exist
     ✔  EC2 Security Group sg-0a70ba72 description should cmp == "Used in the terraform"
     ✔  EC2 Security Group sg-0a70ba72 vpc_id should cmp == "vpc-0eacdb69"
  ✔  aws_subnet::subnet-fbc7f29c: Iggy terraform.tfstate aws_subnet::subnet-fbc7f29c
     ✔  VPC Subnet subnet-fbc7f29c should exist
     ✔  VPC Subnet subnet-fbc7f29c availability_zone should cmp == "us-west-1a"
     ✔  VPC Subnet subnet-fbc7f29c cidr_block should cmp == ""
     ✔  VPC Subnet subnet-fbc7f29c vpc_id should cmp == "vpc-0eacdb69"
  ✔  aws_vpc::vpc-0eacdb69: Iggy terraform.tfstate aws_vpc::vpc-0eacdb69
     ✔  VPC vpc-0eacdb69 should exist
     ✔  VPC vpc-0eacdb69 cidr_block should cmp == ""
     ✔  VPC vpc-0eacdb69 dhcp_options_id should cmp == "dopt-d76783b2"
     ✔  VPC vpc-0eacdb69 instance_tenancy should cmp == "default"

Profile Summary: 5 successful controls, 0 control failures, 0 controls skipped
Test Summary: 19 successful, 0 failures, 0 skipped
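Conceptually, inspec terraform generate walks the .tfstate file and maps each Terraform resource type onto the corresponding InSpec resource, emitting one control per resource. The following is a minimal, self-contained Ruby sketch of that idea; the tfstate snippet and the one-entry type map are illustrative only, not Iggy's actual implementation:

```ruby
require 'json'

# A trimmed-down, Terraform 0.11-era style tfstate fragment (illustrative).
tfstate = <<~STATE
  {
    "modules": [
      {
        "resources": {
          "aws_instance.web": {
            "type": "aws_instance",
            "primary": {
              "id": "i-0ed224373e440f72b",
              "attributes": { "instance_type": "t2.micro", "key_name": "mattray-tf" }
            }
          }
        }
      }
    ]
  }
STATE

# Map Terraform resource types to InSpec resource names (one entry for brevity).
TYPE_MAP = { 'aws_instance' => 'aws_ec2_instance' }.freeze

controls = JSON.parse(tfstate)['modules'].flat_map do |mod|
  mod['resources'].map do |_tf_name, res|
    inspec_type = TYPE_MAP.fetch(res['type'])
    id = res['primary']['id']
    lines = ["control '#{inspec_type}::#{id}' do",
             "  describe #{inspec_type}('#{id}') do",
             '    it { should exist }']
    res['primary']['attributes'].each do |attr, value|
      lines << "    its('#{attr}') { should cmp '#{value}' }"
    end
    lines << '  end' << 'end'
    lines.join("\n")
  end
end

puts controls.first
```

Iggy itself covers many more resource types and filters out attributes that don't map cleanly; the point here is just the shape of the tfstate-to-control mapping.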

inspec terraform extract

This currently reads the terraform.tfstate file, looks for tagged resources, and extracts InSpec commands for executing profiles against those machines. This is still under development, but the current demo provides the following:

$ inspec terraform extract -t terraform.tfstate
inspec exec -t ssh:// -i mattray-tf
inspec exec -t ssh:// -i mattray-tf
inspec exec -t aws://us-west-2

The output needs a small bit of tweaking, but it works:

inspec exec -t ssh://ubuntu@ -i mattray-tf
...
Profile Summary: 5 successful controls, 8 control failures, 1 control skipped
Test Summary: 103 successful, 14 failures, 1 skipped
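The idea behind extract can be sketched the same way: scan the state file for resources that carry an InSpec-related tag and turn each one into an inspec exec invocation. In this hypothetical Ruby sketch, the tags.Inspec key and the flattened attribute layout are assumptions rather than Iggy's real schema:

```ruby
require 'json'

# Illustrative tfstate fragment: one instance tagged with a profile URL, one untagged.
tfstate = <<~STATE
  {
    "modules": [
      {
        "resources": {
          "aws_instance.web": {
            "type": "aws_instance",
            "primary": {
              "attributes": {
                "public_ip": "198.51.100.10",
                "tags.Inspec": "https://github.com/dev-sec/linux-baseline"
              }
            }
          },
          "aws_instance.db": {
            "type": "aws_instance",
            "primary": {
              "attributes": { "public_ip": "198.51.100.11" }
            }
          }
        }
      }
    ]
  }
STATE

commands = JSON.parse(tfstate)['modules'].flat_map do |mod|
  mod['resources'].values.map do |res|
    attrs = res['primary']['attributes']
    profile = attrs['tags.Inspec']
    next unless profile # skip resources without an InSpec tag

    "inspec exec #{profile} -t ssh://#{attrs['public_ip']} -i mattray-tf"
  end.compact
end

puts commands
```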

Working with InSpec-Iggy

InSpec-Iggy is available through RubyGems, so you can gem install inspec-iggy to get started now. If you want to get involved in development, there are further instructions on GitHub.

Writing InSpec Plugins

Writing InSpec plugins is not yet a documented feature, so I've written an example InSpec plugin and pushed it to RubyGems and GitHub if you would like to learn more.

The Future of Iggy

Chef has been working with a leading international banking group to automate cloud compliance for Singapore and Hong Kong. We've been gathering requirements and use cases for the integration of InSpec and Terraform, and we welcome your feedback too. InSpec-Iggy is open source and Apache-licensed. Iggy is not yet 1.0; we want to build out stronger support for more Terraform resources and a better inspec terraform extract experience. Support for AWS CloudFormation is also under active development, and Azure Resource Manager templates will follow a similar pattern. We look forward to your input, testing, and patches as we work to expand InSpec's coverage of all of your infrastructure and resources.

The post Automatically Generating InSpec Controls from Terraform appeared first on Chef Blog.


Read in my feedly

Sent from my iPhone

InSpec now available in Azure Cloud Shell

// Chef Blog

This week brings us into another delightful ChefConf! We've made a lot of great announcements about enhancements and features that have been added across our suite of automation tools, and in Chef Automate itself with Automate 2.0. We also announced that Chef Software is now even more tightly integrated with the Microsoft Azure platform. Users can now run InSpec natively as a part of the Azure Cloud Shell experience. This allows everyone using Cloud Shell to easily run InSpec compliance scans right from their browser!

Azure Cloud Shell allows you to connect to Azure using an authenticated, browser-based shell experience that's hosted in the cloud and accessible from virtually anywhere. Azure Cloud Shell is assigned per unique user account and automatically authenticated with each session. You get a modern command-line experience from multiple access points, including the Azure portal, Azure mobile app, Azure docs (e.g., Azure CLI 2.0), and the VS Code Azure Account extension.

Using InSpec in Azure Cloud Shell is super easy! Just call inspec from the bash prompt, and you're on your way!

InSpec is able to leverage the Azure Managed Service Identity system that's baked into Cloud Shell to give you instantaneous access to your Azure resources in any subscription you have access to. All the examples in this blog can be found on GitHub at:

In the following use cases we've exported our subscription ID to an environment variable.

To scan a resource group in your subscription, just call "inspec exec [your profile] -t azure://[your subscription id]" with an Azure resource profile.
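For instance, a minimal resource-group control might look like the following; the group name and location are placeholders, and the azure_resource_group resource comes from InSpec's Azure support:

```ruby
control 'azure-resource-group' do
  impact 1.0
  title 'Resource group should exist in the expected location'

  # 'Inspec-Azure' and 'westeurope' are placeholder values.
  describe azure_resource_group(name: 'Inspec-Azure') do
    it { should exist }
    its('location') { should cmp 'westeurope' }
  end
end
```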

In this example we first scan for a resource group which we have the wrong name for, so our tests fail. When we provide the correct resource group name we get our other results back.

The next example shows a more detailed scan of a VM resource in a resource group:

We scan the system here for several different VM resource attributes, so that we can verify our deployment is configured to the specifications our team requested. The results of the InSpec scan show that we've got some changes to make to this VM resource to get it into compliance.

Finally, this example shows you can still use InSpec in Cloud Shell to do remote scans on systems in your environment by providing the appropriate credentials for a machine.

Here we run the DevSec Linux Baseline against our Ubuntu 16.04 VM. This is an empty VM, and it could use some remediation with a Chef cookbook.

Get Started

You can get running with Azure Cloud Shell today by visiting!

We hope you enjoy using InSpec inside of Azure Cloud Shell! We'll be looking to add other tools into Cloud Shell in the near future.

Learn More

To learn more about how to use InSpec and Azure together, check out these resources:

The post InSpec now available in Azure Cloud Shell appeared first on Chef Blog.



Chef Deepens Support for Google Cloud Platform

// Chef Blog

Building on the work we announced last fall to help you provision GCP resources with Chef cookbooks, and in honor of ChefConf 2018, Chef and Google Cloud Platform (GCP) have been working together in several exciting ways:

Let's take a deeper look at each of these new developments.

InSpec integration with GCP

In an increasingly complex regulatory environment, many DevOps teams and information security officers struggle to answer important questions:

  • Is our infrastructure deployed and configured as it should be?
  • Can we prove that our deployments are compliant with a growing list of guidelines (CIS, PCI, SOX, HIPAA, etc.)?

InSpec by Chef helps you express security and compliance requirements as code and incorporate them directly into the delivery process, eliminating ambiguity and manual processes to help you ship faster while remaining secure.

GCP continues to introduce new ways to protect and control your GCP services and data. This has made it a popular platform for high-profile customers like major motion picture studios, which use GCP for security sensitive workloads such as rendering pipelines for digital assets.

Now InSpec users can continuously test their Google Cloud deployments (regardless of what tool they have used to provision and configure them) for issues like whether a firewall should allow HTTP traffic or whether a storage bucket should be open to the world.
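As a sketch, such checks use the InSpec GCP resources; the project, firewall, and bucket names below are placeholders:

```ruby
control 'gcp-network-and-storage' do
  # Placeholder project and firewall rule names.
  describe google_compute_firewall(project: 'my-project', name: 'allow-http') do
    it { should exist }
    its('direction') { should cmp 'INGRESS' }
  end

  # Placeholder bucket name.
  describe google_storage_bucket(name: 'my-app-assets') do
    it { should exist }
    its('storage_class') { should cmp 'STANDARD' }
  end
end
```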

Further, Chef and Google are developing a recommended baseline InSpec profile for securing GCP resources, and will incorporate access to InSpec into Google Cloud Security Command Center for ease of use straight from the Google Cloud Console.

Google Container Registry support in Habitat

Habitat by Chef delivers application automation that helps modern application teams build, deploy, and manage any application in any environment, from traditional data centers to containerized microservices. In December 2017, Chef announced support for running Habitat applications on Google Kubernetes Engine, with the ability to publish your containers via Docker Hub. Learn more about this at the session "How the Habitat-operator Brings Habitat Awesomeness to Kubernetes" on May 23rd at 4:00 p.m. at ChefConf.

Later this summer, Habitat users will be able to build their applications and directly publish these artifacts into Google Container Registry. This integration of Habitat with Container Registry and Kubernetes Engine will enable customers to refactor and re-architect their apps into modern containerized architectures as part of their migration efforts onto GCP.

Provision more GCP resources with Chef

In 2017, we released Chef cookbooks to provision and configure the following GCP services:

Recently, we've also added coverage for the following services:

You can download these individually via Chef Supermarket, or get them all together here.

See you at the show

If you'll be at ChefConf, we'd also love to see you at the Google booth during the event. You can attend the "Let's use Google Cloud Platform (GCP) and Chef" session at 2:00 p.m. on May 24th to learn about using Chef together with GCP's suite of services.

The post Chef Deepens Support for Google Cloud Platform appeared first on Chef Blog.



Announcing Habitat Builder on-premises and expanded ecosystem integrations

// Chef Blog

We are excited to be making a number of Habitat-related product announcements today at ChefConf 2018. Over the last couple of years, our customers have adopted Habitat for two main scenarios: lifting, shifting, and modernizing legacy applications into the cloud or containers, and accelerating the adoption of containers for new applications as they move into wider deployment of technologies like Kubernetes. This week's product announcements have a direct connection to these use cases.

First, we are announcing the general availability of Habitat depot behind the firewall. This helps all Habitat users, but particularly those who have legacy or proprietary business applications that cannot be built and published through the Habitat Builder SaaS. Over the last six months we have been working with a small group of customer development partners to bring this capability to life. You can read more about its features here.

Secondly, we are making a number of announcements related to Habitat integrations into the cloud-native and container ecosystem:

  • The newly-updated Habitat Operator for Kubernetes bridges the standard management interface of Habitat services with the Operator model of Kubernetes for container maintenance. It is the recommended way to operate Habitat packages inside Kubernetes and also saves developers from having to write their own operators for each and every application they deploy into Kubernetes.
  • Habitat Builder can now publish directly to Azure Container Registry (ACR) allowing for one-click continuous deployment of even the most sophisticated applications into Azure Kubernetes Service (AKS). We launched this a few weeks ago at Microsoft Build, so you can read more about it here. Be sure to attend our webinar in a couple of weeks where we will demonstrate how Habitat's build once, run anywhere approach allows you to deliver the same application to Azure Compute Service and AKS with no additional work.
  • The Open Service Broker (OSB) standard, originally created by Pivotal, allows you to bridge applications and services running on different clouds and platforms. We're thrilled to announce a Habitat OSB reference implementation so you can build and ship these packages once, whether they are running in a containerized environment or not.
  • Helm chart integration allows you to export your Habitat-built packages into Helm.
  • Finally, you can now track Habitat application health using an integration with Splunk's HTTP Event Collector (HEC).

Habitat at ChefConf

It's hard to believe that Habitat is not even two years old and yet we have dozens of customers – many of whom are here this week at ChefConf – telling their stories about how Habitat is helping them streamline their development processes and achieve one way to production for applications, no matter what their vintage. You'll hear many of these stories on the main stage. But don't miss some of the breakout sessions where customers are sharing practical lessons learned and problems they are solving with Habitat. And, as usual, you can visit to get started with Habitat.

The post Announcing Habitat Builder on-premises and expanded ecosystem integrations appeared first on Chef Blog.



Introducing Chef Workstation

// Chef Blog

We're excited to announce the release of Chef Workstation, providing everything you need to get started with Chef with a simple one-click installation.

Ad-Hoc Configuration Management with chef-run

Chef Workstation comes with the new chef-run utility, which can be used to execute chef code on any remote system accessible via SSH or WinRM. This provides a quick way to apply config changes to the systems you manage whether or not they're being actively managed by Chef, without requiring any pre-installed software. With chef-run, you can execute individual resources, or pre-existing Chef recipes on any number of servers with a single, simple command.

In the simple example above, we see chef-run used in tandem with InSpec, Chef's compliance automation framework. First, InSpec is checking to see whether our host is configured with the ntp package installed, which is responsible for ensuring server clocks are kept in sync. Since our InSpec profile is reporting a failure, we then use chef-run to install ntp using Chef's package resource, like so:

chef-run -i ~/path/to/sshkey user@host package ntp action=install

Finally, we re-run the previously failing InSpec profile for immediate validation that our update was successfully applied.
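The InSpec check in this workflow can be as small as a single describe block using InSpec's built-in package resource; a minimal sketch (profile scaffolding omitted):

```ruby
describe package('ntp') do
  it { should be_installed }
end
```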

Robust Testing & Development Tools

Chef Workstation also includes everything already packaged within the ChefDK. Development tools for testing, dependency resolution, and cookbook generation are all included alongside chef-run, ensuring that whether you're consuming existing chef policies, or creating your own, you have everything you need to get up and running quickly.

Get Started Now

The post Introducing Chef Workstation appeared first on Chef Blog.



Chef Automate 2 – A Modern Platform for Continuous Automation

// Chef Blog

We are delighted to announce the general availability of Chef Automate 2, a major upgrade to our continuous automation platform. Chef Automate 2 is the culmination of a nine-month re-architecture initiative to improve performance, scale, and responsiveness. A refreshed UI allows you to see all infrastructure and compliance events in one interface and, most importantly, isolate and debug failures. Building on the new capabilities in InSpec 2, we've enhanced the compliance features in Chef Automate to bring in cloud and network device scanning and made it easier to manage custom profiles. Finally, a true platform API in Chef Automate 2 allows fine-grained data access control and makes possible new integrations with our many partners including ServiceNow, Splunk, Google Cloud Platform, HP Enterprise, and others joining us this week in Chicago.

Install Chef Automate 2 and start a 60-day trial!

Enhanced operational visibility and debugging

Chef Automate 2 provides new tools and visualizations to help users gain the actionable insights they need to detect and correct problems faster. A streaming event feed displays every action taken and helps identify issues. Improved querying capabilities allow for easier and more insightful drill-down into infrastructure and compliance events to uncover the source of problems.

Compliance scanning and reporting in any environment

Since last year's ChefConf, Chef Automate has added significant compliance capabilities to detect and report on issues covering a wide range of environments, compliance benchmarks, and use cases. Chef Automate 2 continues that trend to extend to the cloud and network devices by taking advantage of the latest innovations in InSpec. Chef Automate 2 supports compliance scanning and reporting in AWS, Azure, and Google Cloud Platform environments, as well as against Cisco IOS network devices. This helps organizations take advantage of a single platform to test and secure their entire fleet.

Re-architected for speed and flexibility

Our customers put Chef Automate to the test in demanding, large scale, mission critical environments every day. Over the past year our engineering team has worked closely with customers to ensure Chef Automate meets their demands and is ready to take on the next set of challenges headed this way, including automating fleet sizes of tens of thousands of nodes. Chef Automate 2 features a modern UI built on top of an API-driven microservices architecture, which allows for dramatically faster performance, scale, and true integration points for customers and partners.

Moving forward

Take Chef Automate for a 60-day trial. Current Chef Automate customers can take advantage of in-place upgrades with automatic data migration from Chef Automate 1.x. For more information, please visit:

The post Chef Automate 2 – A Modern Platform for Continuous Automation appeared first on Chef Blog.



Chef DK 3.0 Released

// Chef Blog

Today we're delighted to announce the release of Chef DK 3.0. With this release, you can detect and correct issues across more platforms faster than ever before with the addition of Chef 14 and InSpec 2.

Chef 14

Chef 14 brings with it a variety of performance and workflow improvements, as well as nearly thirty new resources native to the Chef DSL. This includes better built-in support for Windows and macOS management, as well as native management of Red Hat Subscription Manager (RHSM) within Chef. For more details on what's new in Chef 14, be sure to check out our release announcement and webinar.

InSpec 2

InSpec 2 introduced the ability to scan more than just servers, with the ability to connect directly to cloud APIs to validate that servers and services alike are configured securely. This release includes resources for Microsoft Azure and Amazon Web Services so that as you take advantage of your cloud vendor's utilities, you can validate their compliance with the same ease and rigor as with homegrown solutions on traditional infrastructure. Combine that ability with performance improvements, and new resources for validating everything from SQL to IIS to Docker containers, and you have the most robust InSpec ever at your fingertips in Chef DK 3! Find out more in our release announcement and InSpec 2.0 Webinar.
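For example, InSpec 2's AWS resources let you assert that a storage bucket exists and is not world-readable; the bucket name here is a placeholder:

```ruby
describe aws_s3_bucket(bucket_name: 'my-audit-logs') do
  it { should exist }
  it { should_not be_public }
end
```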

Get Chef DK 3 Today

Get hands-on with Chef DK 3.0, as well as past releases, by downloading the installer for your OS from

What's Next?

The post Chef DK 3.0 Released appeared first on Chef Blog.



Happy Birthday Learn Chef Rally!

// Chef Blog

We're excited to celebrate the 1-year anniversary of Learn Chef Rally! It's been a great year for learning Chef, and as part of the celebration, we'll be releasing a limited edition badge at ChefConf next week. I'm not going to show it to you today, but imagine a magical feline celebrating, and you are getting close. The badge will be available to any registered learner that completes a module between May 22 – 25, 2018.

Learn Chef Rally by the numbers

In this first year, 20,000 Chefs have created an account on Learn Chef Rally to track their progress and earn badges. Thousands more are using Learn Chef Rally anonymously. Overall, Chefs have completed more than 25,000 learning modules.

Speaking of badges, more than 10,000 have been awarded and we were excited to see so many of you join the fun. There are currently 18 badges available to earn, in addition to some occasional limited edition badges. Here's a sample of popular badges:

New content every month

Back in September 2017, Thomas Petchel blogged about celebrating 10K registered users with highlights of popular content. We've been adding new content every month since then, so if it's been a while, I suggest you visit the site again to discover new content and site improvements.

Show off your progress

If you like earning badges and documenting your progress, you should create an account and login whenever you start a new module. You'll be able to see which tracks and modules you've completed, your progress in unfinished tracks and modules, the badges you've collected, and other accomplishments. You'll also be notified when there's new material available.

Learn Chef Rally SWAG Store

As you complete tracks and hit specific milestones in Learn Chef Rally, you'll receive an email with a link to choose a gift from our SWAG store – sunglasses, bottle openers, and notebooks, just to name a few of your options.

Get started on Learn Chef Rally

I encourage you to make your way to Learn Chef Rally soon and sign up for an account before the next badge is released next week. You'll find a lot of helpful content for developing your Chef, Habitat, InSpec, and/or DevOps skills. If you see a gap, please let us know at

The post Happy Birthday Learn Chef Rally! appeared first on Chef Blog.



Chef Open Source Community News – April 2018

// Chef Blog

Here's what happened in the Chef, Habitat and InSpec open-source communities over the last month.


The biggest news from the Chef project is that we released Chef 14, a faster and easier-to-use Chef. We've already covered all the major changes in a couple of blog posts and a webinar, so we won't delve into them again here. We are also on track to release ChefDK 3 by the end of April, which will include both Chef 14 and InSpec 2. Finally, on April 30, we bid a fond farewell to Chef 12, which reaches end-of-life.


We released Habitat 0.55.0 near the end of March. This release has a number of new features. Most important is an upgrade to the Launcher, the process manager used by the Habitat supervisor. By design, the Launcher rarely changes unless there are major improvements or bugfixes, so this is a good time to update. The Habitat Builder also gained secrets support so you can inject sensitive information like database passwords or license keys into the build process.

Finally, authorization to Habitat Builder will no longer work using the old GitHub authentication tokens. You should instead generate a Habitat personal access token as mentioned in last month's update. If you are suddenly getting authorization errors interacting with Habitat Builder, this is why.


The InSpec team released several new versions over the last month, primarily to add AWS-related resources and enhance parameters that can be matched inside existing resources. We highly recommend upgrading to InSpec 2.1.54 if you are developing InSpec profiles for AWS cloud compliance. The following is a short list of new AWS resources released over the last month:

Notable enhancements include the ability to test an AWS account's root user for presence of a hardware or virtual multi-factor authentication (MFA) device, as well as a significant expansion of the interface for fully testing AWS security group egress and ingress rules and the network configuration of your VPC.
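As a sketch of those two enhancements, the root-user MFA check and a security group ingress test look roughly like this (the security group ID is a placeholder):

```ruby
describe aws_iam_root_user do
  it { should have_mfa_enabled }
end

describe aws_security_group(group_id: 'sg-12345678') do
  it { should allow_in(port: 443) }
end
```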


We released Test Kitchen 1.21 which moves to using kitchen.yml as the configuration file name rather than .kitchen.yml (with the leading period) for better consistency with the rest of Chef's tools. The older name is still supported for backwards compatibility.
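A minimal kitchen.yml under the new name might look like the following; the driver, platform, and cookbook names are illustrative:

```yaml
---
driver:
  name: vagrant

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]
```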

Community MVP

Every month we reward outstanding contributions from one of our community members across the Chef, Habitat and InSpec projects. These contributions do not necessarily need to include code; we want to recognize someone who has dedicated significant time and energy to strengthening our community.

This month we'd like to recognize Romain Sertelon, who is a tireless contributor to the Habitat community. Romain has submitted dozens of pull requests in the last year and provided countless hours of support in Slack. Recently, Romain contributed a substantial amount of work towards the core plans refresh, without which it would have taken twice as long. The support and code he gives to the community, the thoughtful comments he provides in the request-for-comments (RFC) process, and everything else he does embody what a stellar community member looks like. In short, Romain works his ass off and the entire Habitat team would like to make sure he knows he is valued by the project.

Thanks for everything, Romain, and we'll be in touch to get your mailing address so we can send you a special token of our appreciation.

The post Chef Open Source Community News – April 2018 appeared first on Chef Blog.



Thursday, April 26, 2018

Finding Packages for Kali Linux

// Kali Linux

In an earlier post, we covered Package Management in Kali Linux. With the ease of installation that APT provides, we have the choice amongst tens of thousands of packages but the downside is, we have tens of thousands of packages. Finding out what packages are available and finding the one(s) we want can be a daunting task, particularly for newcomers to Linux. In this post, we will cover three utilities that can be used to search through the haystack and help you take advantage of the vast open source ecosystem.


Of the various interfaces available to search for packages, apt-cache is the most basic and rudimentary of them all. However, it is also the interface we tend to use most often because it is fast, easy, and efficient. By default, apt-cache searches for a given term in package names as well as their descriptions. For example, knowing that all Kali Linux metapackages include 'kali-linux' in their names, we can easily search for all of them.

root@kali:~# apt-cache search kali-linux
kali-linux - Kali Linux base system
kali-linux-all - Kali Linux - all packages
kali-linux-forensic - Kali Linux forensic tools
kali-linux-full - Kali Linux complete system
kali-linux-gpu - Kali Linux GPU tools
kali-linux-nethunter - Kali Linux Nethunter tools
kali-linux-pwtools - Kali Linux password cracking tools
kali-linux-rfid - Kali Linux RFID tools
kali-linux-sdr - Kali Linux SDR tools
kali-linux-top10 - Kali Linux Top 10 tools
kali-linux-voip - Kali Linux VoIP tools
kali-linux-web - Kali Linux webapp assessment tools
kali-linux-wireless - Kali Linux wireless tools

In many cases, apt-cache returns far too many results because it searches in package descriptions. The searches can be limited to the package names themselves by using the --names-only option.

root@kali:~# apt-cache search nmap | wc -l
root@kali:~# apt-cache search nmap --names-only
dnmap - Distributed nmap framework
fruitywifi-module-nmap - nmap module for fruitywifi
nmap-dbgsym - debug symbols for nmap
python-libnmap - Python 2 NMAP library
python-libnmap-doc - Python NMAP Library (common documentation)
python3-libnmap - Python 3 NMAP library
libnmap-parser-perl - parse nmap scan results with perl
nmap - The Network Mapper
nmap-common - Architecture independent files for nmap
zenmap - The Network Mapper Front End
nmapsi4 - graphical interface to nmap, the network scanner
python-nmap - Python interface to the Nmap port scanner
python3-nmap - Python3 interface to the Nmap port scanner

Since apt-cache has such wonderfully greppable output, we can keep filtering results until they're at a manageable number.

root@kali:~# apt-cache search nmap --names-only | egrep -v '(python|perl)'
dnmap - Distributed nmap framework
fruitywifi-module-nmap - nmap module for fruitywifi
nmap - The Network Mapper
nmap-common - Architecture independent files for nmap
nmap-dbgsym - debug symbols for nmap
nmapsi4 - graphical interface to nmap, the network scanner
zenmap - The Network Mapper Front End

You can further filter down the search results but once you start chaining together a few commands, that's generally a good indication that it's time to reach for a different tool.


The aptitude application is a very close cousin of apt and apt-get except it also includes a very useful ncurses interface. It is not included in Kali by default but it can quickly be installed as follows.

root@kali:~# apt update && apt -y install aptitude

After installation, running aptitude without any options will launch the ncurses interface. One of the first things you will notice is that you can quickly and easily browse through packages by category, which greatly helps with sorting through the thousands of available packages.

aptitude tools by category

To search for a package, either press the / character or select 'Find' under the 'Search' menu. As you enter your query, the package results will be updated dynamically.

searching for a package in aptitude

Once you've located a package of interest, you can mark it for installation with the + character, or mark it for removal/deselection with the - character.

package marked for installation

At this point, you can keep searching for other packages to mark for installation or removal. When you're ready to install, press the g key to view the summary of the actions to be taken.

aptitude summary screen

If you're satisfied with the proposed changes, press g again and aptitude will complete the package installations as usual.

The Internet

If you want to restrict your searches to tools that are packaged by the Kali team, the easiest way to do so is probably by using the Google site search operator.

Google search for Kali packages

Learn More

Hopefully, this post will help you answer whether or not a certain tool is available in Kali (or Debian). For a much more detailed treatment of package management, we encourage you to check out the Kali Training site.


Read in my feedly

Sent from my iPhone

Docker for Desktop is Certified Kubernetes

Docker for Desktop is Certified Kubernetes
// Docker Blog

Certified Kubernetes

"You are now Certified Kubernetes." With this comment, Docker for Windows and Docker for Mac passed the Kubernetes conformance tests. Kubernetes has been available in Docker for Mac and Docker for Windows since January, having first been announced at DockerCon EU last year. But why is this important to the many of you who are using Docker for Windows and Docker for Mac?

Kubernetes is designed to be a platform that others can build upon. As with any similar project, the risk is that different distributions vary enough that applications aren't really portable. The Kubernetes project has always been aware of that risk, and this led directly to the formation of the Conformance Working Group. The group owns a test suite that anyone distributing Kubernetes can run, and the results can be submitted to attain official certification. This test suite checks that Kubernetes behaves like, well, Kubernetes; that the various APIs are exposed correctly and that applications built using the core APIs will run successfully. In fact, our enterprise container platform, Docker Enterprise Edition, achieved certification using the same test suite. You can find out more about the test suite from the Conformance Working Group.

This is important for Docker for Windows and Docker for Mac because we want them to be the easiest way for you to develop your applications for Docker and for Kubernetes. Without being able to easily test your application, and importantly its configuration, locally, you always risk the dreaded "works on my machine" situation.

With this announcement following hot on the heels of last week's release of Docker Enterprise Edition 2.0 and its Kubernetes support, it's perfect timing for us to be heading off to KubeCon EU in Copenhagen next week. If you're interested in why we're adopting Kubernetes, or in what we have planned next, please come find us at booth #S-C30, sign up for the Docker and Kubernetes workshop or attend the following talks.

Can't make it to KubeCon Europe? DockerCon 2018 in San Francisco is another great opportunity for you to learn about integration between Kubernetes and the Docker Platform.



The post Docker for Desktop is Certified Kubernetes appeared first on Docker Blog.



Using Ansible and Ansible Tower with shared roles

Using Ansible and Ansible Tower with shared roles
// Ansible Blog

Roles are an essential part of Ansible and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks; within your automation code, the roles are called from your Ansible Playbooks.

Since roles usually have a well-defined purpose, they make it easy to reuse your code, not only for yourself but also within your team. You can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.

So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:

tasks/
handlers/
files/
templates/
vars/
defaults/
meta/

In the folders which contain Ansible code - like tasks, handlers, vars, defaults - there are main.yml files. Those contain the relevant Ansible bits. In the case of the tasks directory, they often include other YAML files within the same directory. Roles even provide ways to test your automation code - in an automated fashion, of course.
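To make this concrete, here is a minimal, hypothetical sketch of two such main.yml files for an imaginary "webserver" role (the role name, package and task names are illustrative only):

```yaml
# roles/webserver/tasks/main.yml -- entry point for the role's tasks
- name: install nginx
  package:
    name: nginx
    state: present
  notify: restart nginx

# roles/webserver/handlers/main.yml -- handlers triggered via notify
- name: restart nginx
  service:
    name: nginx
    state: restarted
```

The notify keyword in the task refers to the handler by its name, which is how the two files cooperate at run time.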

This post will show how roles can be shared with others, be used in your projects and how this works with Red Hat Ansible Tower.

Share Roles via Repositories

Roles can be part of your project repository. They usually sit underneath a dedicated roles/ directory. But keeping roles in your own repository makes it hard to share them with others, to be reused and improved by them. If someone works on a different team, or on a different project, they might not have access to your repository - or they may use their own anyway. So even if you send them a copy of your role, they could add it to their own repository, making it hard to exchange improvements, bug fixes and changes across totally different repositories.

For that reason, a better way is to keep a role in its own repository. That way it can be easily shared and improved. However, to be available to a playbook, the role still needs to be included. Technically there are multiple ways to do that.

For example there can be a global roles directory outside your project where all roles are kept. This can be referenced in ansible.cfg. However, this requires that all developer setups and also the environment in which the automation is finally executed have the same global directory structure. This is not very practical.

When Git is used as the version control system, there is also the possibility of importing roles from other repositories via Git submodules, or even using Git subtrees. However, this requires quite some knowledge about advanced Git features by each and everyone using it - so it is far from simple.

The best way to make shared roles available to your playbooks is to use functionality built into Ansible itself: the ansible-galaxy command can read a file, requirements.yml, that specifies which external roles need to be imported for a successful Ansible run. It lists the external roles and their sources. If needed, it can also point to a specific version:

# from GitHub
- src:

# from GitHub, overriding the name and specifying a tag
- src:
  version: master
  name: nginx_role

# from Bitbucket
- src: git+
  version: v1.4

# from galaxy
- src: yatesr.timezone

The file can be used via the command ansible-galaxy. It reads the file and downloads all specified roles to the appropriate path:

ansible-galaxy install -r roles/requirements.yml
- extracting nginx to /home/rwolters/ansible/roles/nginx
- nginx was installed successfully
- extracting nginx_role to /home/rwolters/ansible/roles/nginx_role
- nginx_role (master) was installed successfully
...

The output also highlights when a specific version was downloaded. You will find a copy of each role in your roles/ directory - so make sure that you do not accidentally add the downloaded roles to your repository! The best option is to add them to the .gitignore file.
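One way to enforce that (a sketch; adjust if your layout differs) is a .gitignore inside the roles/ directory that ignores everything except the requirements file:

```
# roles/.gitignore
*
!requirements.yml
!.gitignore
```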

This way, roles can be imported into the project and are available to all playbooks while they are still shared via a central repository. Changes to a role need to be made in its dedicated repository - which ensures that no careless, project-specific changes sneak into the role.

At the same time, the version attribute in requirements.yml ensures that the role in use can be pinned to a certain release tag, commit hash, or branch name. This is useful when development of a role is moving quickly but your project has longer development cycles.

Using Roles in Ansible Tower

If you use automation on larger, enterprise scales you most likely will start using Ansible Tower sooner or later. So how do roles work with Ansible Tower? In fact - just like mentioned above. Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and thus available to the relevant playbooks.

That way shared roles can easily be reused in Ansible Tower - it is built in right from the start!

Best Practices and Things to Keep in Mind

There are a few best practices around sharing Ansible roles that will make your life easier. The first concerns the naming and location of the roles directory. While it is possible to give the directory any name via the roles_path setting in ansible.cfg, we strongly recommend sticking with the directory name roles, sitting in the root of your project directory. Do not choose another name for it or move it to a subdirectory.

The same is true for requirements.yml: have one requirements.yml only, and keep it at roles/requirements.yml. While it is technically possible to have multiple files and spread them across your project, this will not work when the project is imported into Ansible Tower.

Also, if the roles are not only shared among multiple users, but are also developed with others or not by you at all, it might make sense to pin the role to the actual commit you've tested your setup against. That way you will avoid unwanted changes in the role behaviour.

More Information

Find, reuse, and share the best Ansible content on Ansible Galaxy.

Learn more about roles on Ansible Docs.



Unveiling Microsoft Server 2019 – Here’s Everything You Should Know About It

Unveiling Microsoft Server 2019 – Here's Everything You Should Know About It
// StarWind Blog


We live in an age where our technology needs are ever evolving. Right now we may need things we didn't even know existed a few weeks back, because every little feature or upgrade brings massive value into our everyday lives. The same is the case with the preview release of Microsoft's Windows Server 2019. Let's take a closer peek at what features Microsoft has in store to serve our evolving needs with its new release.

Microsoft is set to launch its new and improved Windows Server 2019, billed as a super-enhanced version of Windows Server 2016, in the second half of 2018. Although Windows Server 2016 remains the most widely used version of the Windows Server generation, Microsoft has already released the first build of the preview of the new operating system. The new server comes with the promise of many new features, but the desktop-like experience, hyper-converged infrastructure, hybrid cloud, platform applications and security remain the priority areas of this newer Windows, where Microsoft has brought in notable enhancements.

The Desktop Experience

The desktop-like Windows Server experience is probably the most exciting part of this release; it was missing from earlier Semi-Annual Channel releases, which were GUI-less and only supported Server Core and Nano configurations. Owing to this being an LTSC release, tech gurus and IT buffs will be able to use a desktop GUI for Windows Server 2019 hand-in-hand with the GUI-less Server Core and Nano configurations.

Hybrid Cloud Goal and Project Honolulu

Hybrid clouds are the buzz now, so Microsoft has kept the tradition alive and made them a priority functionality area, aiming for Hyper-V virtual machines to connect and live migrate between Azure and on-premises Hyper-V hosts. One of the predictions going around about the new Windows Server 2019 is that Microsoft will do away with Server Manager in favour of a more efficient web-based server management tool - Project Honolulu. Eventually, the tool will provide the support needed to manage both on-premises and cloud resources via a single interface. This product evolution is all too familiar to us, because we have previously seen PowerShell metamorphose from a basic support tool into a unified management tool.

Enhanced Security

Having invested heavily in the security features of Windows Server 2016, Microsoft has gone much further in the rollout of the new Windows Server 2019. Not only has Microsoft made security enhancements like Shielded VM support for Linux VMs, it has also added a built-in tool, Windows Defender Advanced Threat Protection (ATP), to detect and assess security breaches and potential malicious attacks, and to respond to them by automatically alerting users and blocking threats. The inclusion of Windows Defender ATP in Windows Server 2019 will help users prevent security compromises using deep kernel and memory sensors, data storage, security integrity components and network transport.

Platform Management and Linux Interoperability

When it comes to platform applications, Windows Server 2019 brings some remarkable upgrades to make it efficient and hassle-free. The base container image has been reduced to a third of its size, decreasing image download time by a whopping 72 percent. There has been much speculation about whether or not the new server will support Kubernetes; like most other major platforms, Kubernetes will be supported by the new server and will see further upgrades in the coming quarters as well. Another key area for platform applications is improving the overall user journey in terms of engagement and navigation: the experience will be improved by allowing Linux users to bring Linux scripts into Windows using industry standards like tar and curl.

Enterprise-Grade HCI

HCI (hyper-converged infrastructure) combines storage, compute, and networking in a software-driven appliance, and users of x86 servers are now realizing what HCI is really worth. It has indeed become one of the most in-demand trends in today's server industry: the HCI market has shown astonishing growth of 64% according to IDC, and analysts including Gartner have said that by 2019 it will become a $5 billion market. The integration of Windows Server 2019 with Project Honolulu makes HCI deployment management more scalable, reliable, and efficient, consequently making the management of everyday activity in HCI environments much easier and simpler.

Wrapping It Up

Windows Server 2019 is set to be launched to the general public soon, but if you are a potential user or an IT buff who can't wait to get their hands on it, join the Windows Insider program for preview and tool testing. It will give you a fair idea of what to expect from the upcoming launch.

Microsoft would love to hear from you! Use the Windows 10 Insider device and the Feedback Hub application to share your views on the preview. When you enter the app, select the Server category and the relevant subcategory, then enter your build number to provide your feedback. You can also visit the Windows Server space in the Tech Community to share your views and learn from the best minds.





Advanced Exploitation: How to Find & Write a Buffer Overflow Exploit for a Network Service

Advanced Exploitation: How to Find & Write a Buffer Overflow Exploit for a Network Service
// Null Byte « WonderHowTo

While our time with the Protostar VM from Exploit Exercises was lovely, we must move on to bigger things and harder challenges. Exploit Exercises' Fusion VM offers some more challenging binary exploitation levels for us to tackle. The biggest change is that these levels are all network services, which means we'll write our first remote exploits. In this guide on advanced exploitation techniques in our Exploit Development series, we'll take a look at the first level in the GNU debugger (GDB), write an exploit, and prepare for bigger challenges. Performing some code analysis will be the... more



CloudStack & Ceph day, Thursday, April 19 – roundup

CloudStack & Ceph day, Thursday, April 19 – roundup
// CloudStack Consultancy & CloudStack...

On Thursday, April 19 the CloudStack community joined up with the Ceph community for a combined event in London, and what an event it was! The meetup took place at the Early Excellence Centre at Canada Water, on a beautiful, sunny day (in fact the hottest day of the year so far), and registration started early with people enjoying coffee and pastries by the canal.

CloudStack & Ceph combined morning sessions

Once everyone had enjoyed breakfast, we all settled down as Wido den Hollander (of 42on) took to the podium to welcome everyone, explain a bit about how the event had come about, and talk through both technologies (CloudStack and Ceph), including how well they work together. Having set the scene, Wido was ready to deliver the first presentation of the day, which was prepared to appeal to both communities – 'Building a highly available cloud with Ceph and CloudStack'. In this talk, Wido described how he came across and started using Ceph, and how his company utilises both Ceph and CloudStack to offer services. To quote Wido… "We manage tens of thousands of Virtual Machines running with Ceph and CloudStack and we think it is a winning and golden combination"! This first talk really got the event off to a flying start, with plenty of questions and interaction from the floor, and Wido's slides can be found here:

Next up was John Spray (Red Hat) with a talk entitled 'Ceph in Kubernetes'. John talked through Kubernetes, Rook, and why and how it all works together. Lots more detail in John's slides:

After a short break, it was the turn of Phil Straw (SoftIron) to present. Phil's talk was 'Ceph on ARM64', and covered all the fundamentals, including some real-world results from SoftIron's appliance. Take a look through Phil's slides:

Sebastien Bretschneider (intelligence) was last up before lunch, with 'Our way to Ceph' – another great talk for both communities as he talked about his experience using Ceph with CloudStack. Starting with some background, Sebastien talked through lessons learnt and how the platform looks now. More information in Sebastien's slides:

CloudStack afternoon sessions

After lunch, the day split into two separate tracks, the first dedicated to CloudStack (the CloudStack European User Group – CSEUG), the second to Ceph. As with the morning, both rooms were very busy, with full agendas. As I focussed on CloudStack, I cover the CloudStack talks here (at the end of the article I include links to the Ceph presentations).

Giles Sirett (CSEUG chairman) brought the room to order and started with introductions and CloudStack news, of which (as always) there is plenty, including new releases – 4.9.3, 4.10, and the latest LTS release, 4.11. Sticking with 4.11, Giles touched on some of the newly introduced features, such as the new host HA framework, the new CA framework and Prometheus integration (much more detail in Paul's later presentation). Giles also mentioned that we are moving towards zero-downtime upgrades – a highly anticipated and much-needed enhancement! What is exciting about this community is that nearly all of the 4.11 features were contributed or instigated by USERS… and speaking of new features – this was someone's response to Citrix changing the XenServer licensing model:

Giles then advised us of some dates for our diaries – lots of upcoming CloudStack events – and shared how CloudStack is growing in awareness and use: we are seeing 800 downloads per month just from our (ShapeBlue) repo, and have several new, large-scale private CloudStack clouds being deployed this year. Please see Giles' slides for much more information:

Once Giles had finished taking questions, he introduced the first presentation: both Wido den Hollander and Mike Tutkowski (SolidFire) gave a brief retrospective of Wido's year as VP of the CloudStack project and welcomed Mike as the new VP! After this quick introduction, next up was Antoine Coetsier (Exoscale) with his talk entitled 'Billing the cloud'.

Starting with Exoscale's background, Antoine talked through pros and cons of different approaches, and explained Exoscale's solution. Antoine's slides are here:

As mentioned earlier, next up was Paul Angus (ShapeBlue), with 'What's new in ACS 4.11', which includes 100s of updates, over 30 new features and the best automated test coverage yet. Paul went through user features, operator features and integrations. This demonstrated just how much work and development is going into CloudStack, and if you are interested to see just how much work and development, I urge you to read through Paul's comprehensive slides:

After a short break, Giles introduced the next talk of the day – Boyan Krosnov (Storpool), 'Building a software-defined cloud'. After a little background, Boyan started by explaining how a hyper-converged cloud infrastructure looks, and then talked about how Storpool works with CloudStack.

Boyan's talk was filmed, and his slides can be found here:

Next up was Mike Tutkowski (SolidFire), who talked us through managed storage in CloudStack, and then presented a live demo so we could see first-hand how it works. As Mike works for SolidFire this was the storage he used for his demo!

The end of this last talk of the CloudStack track coincided with the end of the last Ceph talk next door, so we all congregated together to hear some last thoughts and acknowledgements from Wido. This was the first collaborative day between Ceph and CloudStack, and it was a resounding success, thanks in no small part to the vibrant communities that support these technologies. Bringing the event to a close, we all moved the conversations to a nearby pub, where the event continued late into a beautiful London evening.

Thanks to our sponsors, without whom these great events wouldn't be possible – 42on, Red Hat and ShapeBlue (us!) for providing the venue and refreshments, and SoftIron for providing the evening drinks.

Ceph afternoon sessions

Wido den Hollander – 10 ways to break your Ceph cluster

Nick Fisk – low latency Ceph

The post CloudStack & Ceph day, Thursday, April 19 – roundup appeared first on The CloudStack Company.



ChefConf 2018: Announcing Main Stage Speakers

ChefConf 2018: Announcing Main Stage Speakers
// Chef Blog

It's just under a month until ChefConf 2018 kicks off in Chicago! We've been busy putting the finishing touches on the program for the event, May 22-25 at the Hyatt Regency Chicago. Today I'm excited to share more of our program with you, and some of the partners and customers who have agreed to spend the week with us sharing their stories of digital transformation.

Keynote Speakers

Both Wednesday and Thursday of the conference feature main stage "keynote" presentations which provide context and insight into the new features and scenarios supported by the Chef family of tools. Speakers on this year's main stage include:

Chef Customers

Chef users including the Gates Foundation, Toyota Financial Services, CSG, and more will take you through their IT and application delivery projects, and the impact of automation in delivering delight to their customers at cloud speed.

Chef Partners

Partners such as Microsoft, AWS, and more will show you how Chef Automate works with their platforms and clouds to help organizations deliver apps faster, and with greater insight, across both on-prem and cloud based estates.

Chef Leadership

Chef's own Adam Jacob, Barry Crist, Corey Scobie, Nathen Harvey and more will run through the latest innovations in both application and infrastructure modernization, with technical insights and live demos of the newest features and functionality in Chef Automate and Habitat.

View Agenda and Register

Stay tuned for more exciting speaker announcements as we get closer to the event! The full program can be viewed on the ChefConf website. Check out the agenda, learn about all of the events, including breakout technical sessions and full-day pre-conference workshops, and register for the event there.

The post ChefConf 2018: Announcing Main Stage Speakers appeared first on Chef Blog.



Wednesday, April 25, 2018

Connecting to a Windows Host

Connecting to a Windows Host
// Ansible Blog

Welcome to the first installment of our Windows-specific Getting Started series!

Would you like to automate some of your Windows hosts with Red Hat Ansible Tower, but don't know how to set everything up? Are you worried that Red Hat Ansible Engine won't be able to communicate with your Windows servers without installing a bunch of extra software? Do you want to easily automate everyone's best friend, Clippy?


Image source:

We can't help with the last thing, but if you said yes to the other two questions, you've come to the right place. In this post, we'll walk you through all the steps you need to take in order to set up and connect to your Windows hosts with Ansible Engine.

Why Automate Windows Hosts?

A few of the many things you can do for your Windows hosts with Ansible Engine include:

  • Starting, stopping and managing services
  • Pushing and executing custom PowerShell scripts
  • Managing packages with the Chocolatey package manager
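A hypothetical play touching each of those areas might look like this (the host group, service, script path and package names are all placeholders):

```yaml
- hosts: win
  tasks:
    - name: ensure the print spooler service is running
      win_service:
        name: Spooler
        state: started

    - name: push and execute a custom PowerShell script
      script: files/audit.ps1

    - name: install a package with Chocolatey
      win_chocolatey:
        name: 7zip
        state: present
```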

In addition to connecting to and automating Windows hosts using local or domain users, you'll also be able to use runas to execute actions as the Administrator (the Windows alternative to Linux's sudo or su), so no privilege escalation ability is lost.
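As a sketch of runas in practice (the user and command here are placeholders, not a prescribed setup):

```yaml
- name: run a command as Administrator
  win_command: whoami
  become: yes
  become_method: runas
  become_user: Administrator
```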

What's Required?

Before we start, let's go over the basic requirements. First, your control machine (where Ansible Engine will be executing your chosen Windows modules from) needs to run Linux. Second, Windows support has been evolving rapidly, so make sure to use the newest possible version of Ansible Engine to get the latest features!

For the target hosts, you should be running Windows 7 SP1 or later, or Windows Server 2008 SP1 or later. You don't want to be running something from the 90's like Windows NT, because this might happen:


Image source:

Lastly, since Ansible connects to Windows machines and runs PowerShell scripts by using Windows Remote Management (WinRM) (as an alternative to SSH for Linux/Unix machines), a WinRM listener should be created and activated. The good news is, connecting to your Windows hosts can be done very easily and quickly using a script, which we'll discuss in the section below.

Step 1: Setting up WinRM

What's WinRM? It's a feature of Windows Vista and higher that lets administrators run management scripts remotely; it handles those connections by implementing the WS-Management Protocol, based on Simple Object Access Protocol (commonly referred to as SOAP). With WinRM, you can do cool stuff like access, edit and update data from local and remote computers as a network administrator.

The reason WinRM is perfect for use with Ansible Engine is that you can obtain hardware data from WS-Management protocol implementations running on non-Windows operating systems (in this specific case, Linux). It's basically like a translator that allows different types of operating systems to work together.

So, how do we connect?

With most versions of Windows, WinRM ships in the box but isn't turned on by default. There's a Configure Remoting for Ansible script you can run on the remote Windows machine (in a PowerShell console as an Admin) to turn on WinRM. To set up an https listener, build a self-signed cert and execute PowerShell commands, just run the script like in the example below (if you've got the .ps1 file stored locally on your machine):
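The exact flags can vary, but a typical run (assuming the script has been saved locally as ConfigureRemotingForAnsible.ps1) looks something like this:

```powershell
# From a PowerShell console running as Administrator on the Windows host
powershell.exe -ExecutionPolicy ByPass -File ConfigureRemotingForAnsible.ps1
```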
Note: The win_psexec module will help you enable WinRM on multiple machines if you have lots of Windows hosts to set up in your environment.

For more information on WinRM and Ansible, check out the Windows Remote Management documentation page.

Step 2: Install Pywinrm

Since pywinrm dependencies aren't shipped with Ansible Engine (and these are necessary for using WinRM), make sure you install the pywinrm-related library on the machine that Ansible is installed on. The simplest method is to run pip install pywinrm in your Terminal.

Step 3: Set Up Your Inventory File Correctly

In order to connect to your Windows hosts properly, you need to make sure that you put ansible_connection=winrm in the host vars section of your inventory file, so that Ansible Engine doesn't just keep trying to connect to your Windows host via SSH.

Also, the WinRM connection plugin defaults to communicating via https, but it supports different modes like message-encrypted http. Since the "Configure Remoting for Ansible" script we ran earlier set things up with the self-signed cert, we need to tell Python, "Don't try to validate this certificate because it's not going to be from a valid CA." So in order to prevent an error, one more thing you need to put into the host vars section is: ansible_winrm_server_cert_validation=ignore

Just so you can see it in one place, here is an example host file (please note, some details for your particular environment will be different):

[win]

[win:vars]
ansible_user=vagrant
ansible_password=password
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

Step 4: Test Connection

Let's check to see if everything is working. To do this, go to your control node's terminal and type ansible [host_group_name_in_inventory_file] -i hosts -m win_ping. Your output should look like this:
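Assuming the connection works, the reply (the host name here is a placeholder) has roughly this shape:

```
win-host-01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```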


Note: The win_ prefix on all of the Windows modules indicates that they are implemented in PowerShell and not Python.

Troubleshooting WinRM

Because WinRM can be configured in so many different ways, errors that seem Ansible Engine-related can actually be due to problems with host setup instead. Some examples of WinRM errors that you might see include an HTTP 401 or HTTP 500 error, timeout issues or a connection refusal. To get tips on how to solve these problems, visit the Common WinRM Issues section of our Windows Setup documentation page.


You should now be ready to automate your Windows hosts using Ansible, without the need to install a ton of additional software! Keep in mind, however, that even if you've followed the instructions above, some Windows modules have additional specifications (e.g., a newer OS or more recent PowerShell version). The best way to figure out if you're meeting the right requirements is to check the module-specific documentation pages.

For more in-depth information on how to use Ansible Engine to automate your Windows hosts, check out our Windows FAQ and Windows Support documentation page and stay tuned for more Windows-related blog posts!

