Thursday, April 26, 2018

Finding Packages for Kali Linux
// Kali Linux

In an earlier post, we covered Package Management in Kali Linux. With the ease of installation that APT provides, we have the choice amongst tens of thousands of packages but the downside is, we have tens of thousands of packages. Finding out what packages are available and finding the one(s) we want can be a daunting task, particularly for newcomers to Linux. In this post, we will cover three utilities that can be used to search through the haystack and help you take advantage of the vast open source ecosystem.


apt-cache

Of the various interfaces available to search for packages, apt-cache is the most rudimentary of them all. However, it is also the interface we tend to use most often because it is fast, easy, and efficient. By default, apt-cache searches for a given term in package names as well as their descriptions. For example, knowing that all Kali Linux metapackages include 'kali-linux' in their names, we can easily search for all of them.

root@kali:~# apt-cache search kali-linux
kali-linux - Kali Linux base system
kali-linux-all - Kali Linux - all packages
kali-linux-forensic - Kali Linux forensic tools
kali-linux-full - Kali Linux complete system
kali-linux-gpu - Kali Linux GPU tools
kali-linux-nethunter - Kali Linux Nethunter tools
kali-linux-pwtools - Kali Linux password cracking tools
kali-linux-rfid - Kali Linux RFID tools
kali-linux-sdr - Kali Linux SDR tools
kali-linux-top10 - Kali Linux Top 10 tools
kali-linux-voip - Kali Linux VoIP tools
kali-linux-web - Kali Linux webapp assessment tools
kali-linux-wireless - Kali Linux wireless tools

In many cases, apt-cache returns far too many results because it searches through package descriptions. Searches can be limited to the package names themselves by using the --names-only option.

root@kali:~# apt-cache search nmap | wc -l
root@kali:~# apt-cache search nmap --names-only
dnmap - Distributed nmap framework
fruitywifi-module-nmap - nmap module for fruitywifi
nmap-dbgsym - debug symbols for nmap
python-libnmap - Python 2 NMAP library
python-libnmap-doc - Python NMAP Library (common documentation)
python3-libnmap - Python 3 NMAP library
libnmap-parser-perl - parse nmap scan results with perl
nmap - The Network Mapper
nmap-common - Architecture independent files for nmap
zenmap - The Network Mapper Front End
nmapsi4 - graphical interface to nmap, the network scanner
python-nmap - Python interface to the Nmap port scanner
python3-nmap - Python3 interface to the Nmap port scanner

Since apt-cache has such wonderfully greppable output, we can keep filtering results until they're at a manageable number.

root@kali:~# apt-cache search nmap --names-only | egrep -v '(python|perl)'
dnmap - Distributed nmap framework
fruitywifi-module-nmap - nmap module for fruitywifi
nmap - The Network Mapper
nmap-common - Architecture independent files for nmap
nmap-dbgsym - debug symbols for nmap
nmapsi4 - graphical interface to nmap, the network scanner
zenmap - The Network Mapper Front End

You can further filter down the search results but once you start chaining together a few commands, that's generally a good indication that it's time to reach for a different tool.
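As a sketch of how such chained filters behave, here is the same grep stage run against a canned copy of the package list above, so the pipeline itself can be tried without a live APT cache:

```shell
# Simulated `apt-cache search nmap --names-only` output;
# in practice you would pipe the real apt-cache command instead.
cat <<'EOF' > pkgs.txt
dnmap - Distributed nmap framework
python-nmap - Python interface to the Nmap port scanner
libnmap-parser-perl - parse nmap scan results with perl
nmap - The Network Mapper
zenmap - The Network Mapper Front End
EOF

# Drop the language bindings, keeping only the core tools.
grep -Ev '(python|perl)' pkgs.txt
# three lines remain: dnmap, nmap and zenmap
```

The same pattern extends naturally: each additional grep in the chain whittles the list down further.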


aptitude

The aptitude application is a very close cousin of apt and apt-get, except that it also includes a very useful ncurses interface. It is not included in Kali by default, but it can quickly be installed as follows.

root@kali:~# apt update && apt -y install aptitude

After installation, running aptitude without any options will launch the ncurses interface. One of the first things you will notice is that you can quickly and easily browse through packages by category, which greatly helps with sorting through the thousands of available packages.

aptitude tools by category

To search for a package, either press the / character or select 'Find' under the 'Search' menu. As you enter your query, the package results will be updated dynamically.

searching for a package in aptitude

Once you've located a package of interest, you can mark it for installation with the + character, or remove/deselect it with the - character.

package marked for installation

At this point, you can keep searching for other packages to mark for installation or removal. When you're ready to install, press the g key to view the summary of the actions to be taken.

aptitude summary screen

If you're satisfied with the proposed changes, press g again and aptitude will complete the package installations as usual.

The Internet

If you want to restrict your searches to tools that are packaged by the Kali team, the easiest way to do so is probably by using the Google site search operator.

Google search for Kali packages

Learn More

Hopefully, this post will help you answer whether or not a certain tool is available in Kali (or Debian). For a much more detailed treatment of package management, we encourage you to check out the Kali Training site.


Read in my feedly

Sent from my iPhone

Docker for Desktop is Certified Kubernetes
// Docker Blog

Certified Kubernetes

"You are now Certified Kubernetes." With this comment, Docker for Windows and Docker for Mac passed the Kubernetes conformance tests. Kubernetes has been available in Docker for Mac and Docker for Windows since January, having first been announced at DockerCon EU last year. But why is this important to the many of you who are using Docker for Windows and Docker for Mac?

Kubernetes is designed to be a platform that others can build upon. As with any such project, the risk is that different distributions vary enough that applications aren't really portable. The Kubernetes project has always been aware of that risk, and this led directly to the formation of the Conformance Working Group. The group owns a test suite that anyone distributing Kubernetes can run, submitting the results to attain official certification. This test suite checks that Kubernetes behaves like, well, Kubernetes: that the various APIs are exposed correctly and that applications built using the core APIs will run successfully. In fact, our enterprise container platform, Docker Enterprise Edition, achieved certification using the same test suite.

This is important for Docker for Windows and Docker for Mac because we want them to be the easiest way for you to develop your applications for Docker and for Kubernetes. Without being able to easily test your application, and importantly its configuration, locally, you always risk the dreaded "works on my machine" situation.

With this announcement following hot on the heels of last week's release of Docker Enterprise Edition 2.0 and its Kubernetes support, it's perfect timing for us to be heading off to KubeCon EU in Copenhagen next week. If you're interested in why we're adopting Kubernetes, or in what we have planned next, please come find us at booth #S-C30, sign up for the Docker and Kubernetes workshop or attend the following talks.

Can't make it to KubeCon Europe? DockerCon 2018 in San Francisco is another great opportunity for you to learn about integration between Kubernetes and the Docker Platform.



The post Docker for Desktop is Certified Kubernetes appeared first on Docker Blog.



Using Ansible and Ansible Tower with shared roles
// Ansible Blog

Roles are an essential part of Ansible, and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks, which are then called from your Ansible Playbooks.

Since roles usually have a well defined purpose, they make it easy to reuse your code for yourself, but also in your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.

So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:

tasks/   handlers/   files/   templates/   vars/   defaults/   meta/  

In folders which contain Ansible code - like tasks, handlers, vars, defaults - there are main.yml files. Those contain the relevant Ansible bits. In case of the tasks directory, they often include other yaml files within the same directory. Roles even provide ways to test your automation code - in an automated fashion, of course.
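As a sketch of the layout described above (the role name webserver is just an example), the structure can be created by hand like this; in practice, `ansible-galaxy init webserver` generates a skeleton for you:

```shell
# Create the standard role directory layout.
for d in tasks handlers files templates vars defaults meta; do
    mkdir -p "webserver/$d"
done

# The code-carrying folders each get a main.yml to hold the relevant Ansible bits.
for d in tasks handlers vars defaults meta; do
    printf -- '---\n' > "webserver/$d/main.yml"
done

find webserver -name main.yml | sort
```

Note that files/ and templates/ hold content to be copied or rendered, so they get no main.yml of their own.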

This post will show how roles can be shared with others, be used in your projects and how this works with Red Hat Ansible Tower.

Share Roles via Repositories

Roles can be part of your project repository. They usually sit underneath a dedicated roles/ directory. But keeping roles in your own repository makes it hard to share them with others, to be reused and improved by them. If someone works on a different team, or on a different project, they might not have access to your repository - or they may use their own anyway. So even if you send them a copy of your role, they could add it to their own repository, making it hard to exchange improvements, bug fixes and changes across totally different repositories.

For that reason, a better way is to keep a role in its own repository. That way it can be easily shared and improved. However, to be available to a playbook, the role still needs to be included. Technically there are multiple ways to do that.

For example there can be a global roles directory outside your project where all roles are kept. This can be referenced in ansible.cfg. However, this requires that all developer setups and also the environment in which the automation is finally executed have the same global directory structure. This is not very practical.

When Git is used as the version control system, there is also the possibility of importing roles from other repositories via Git submodules, or even Git subtrees. However, this requires quite some knowledge of advanced Git features from everyone using it - so it is far from simple.

The best way to make shared roles available to your playbooks is to use a function built into Ansible itself: the ansible-galaxy command can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml. It lists external roles and their sources. If needed, it can also point to a specific version:

# from GitHub
- src:

# from GitHub, overriding the name and specifying a tag
- src:
  version: master
  name: nginx_role

# from Bitbucket
- src: git+
  version: v1.4

# from galaxy
- src: yatesr.timezone

The file can be used via the command ansible-galaxy. It reads the file and downloads all specified roles to the appropriate path:

ansible-galaxy install -r roles/requirements.yml
- extracting nginx to /home/rwolters/ansible/roles/nginx
- nginx was installed successfully
- extracting nginx_role to /home/rwolters/ansible/roles/nginx_role
- nginx_role (master) was installed successfully
...

The output also highlights when a specific version was downloaded. You will find a copy of each role in your roles/ directory - so make sure that you do not accidentally add the downloaded roles to your repository! The best option is to add them to the .gitignore file.
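A minimal .gitignore for that (assuming requirements.yml itself should stay under version control) could look like this:

```
# Ignore downloaded roles...
roles/*
# ...but keep the requirements file itself.
!roles/requirements.yml
```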

This way, roles can be imported into the project and are available to all playbooks, while they are still shared via a central repository. Changes to a role need to be made in its dedicated repository - which ensures that no careless, project-specific changes end up in the role.

At the same time the version attribute in requirements.yml ensures that the used role can be pinned to a certain release tag value, commit hash, or branch name. This is useful in case the development of a role is quickly moving forward, but your project has longer development cycles.
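For example, a pinned entry in requirements.yml might look like this; the repository URL and commit hash here are placeholders, not real values:

```yaml
# Pin the shared role to an exact, tested state.
- src: https://github.com/example/ansible-role-nginx
  name: nginx_role
  version: 8f1c26b   # a release tag, branch name, or commit hash
```

Pinning to a commit hash gives the strongest guarantee; a branch name merely tracks whatever that branch currently points at.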

Using Roles in Ansible Tower

If you use automation on larger, enterprise scales you most likely will start using Ansible Tower sooner or later. So how do roles work with Ansible Tower? In fact - just like mentioned above. Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and thus available to the relevant playbooks.

That way shared roles can easily be reused in Ansible Tower - it is built in right from the start!

Best Practices and Things to Keep in Mind

There are a few best practices around sharing Ansible roles that make your life easier. The first concerns the naming and location of the roles directory. While it is possible to give the directory any name via the roles_path setting in ansible.cfg, we strongly recommend sticking to the directory name roles, sitting in the root of your project directory. Do not choose another name for it or move it to some subdirectory.

The same is true for requirements.yml: have one requirements.yml only, and keep it at roles/requirements.yml. While it is technically possible to have multiple files and spread them across your project, this will not work when the project is imported into Ansible Tower.

Also, if the roles are not only shared among multiple users, but are also developed with others or not by you at all, it might make sense to pin the role to the actual commit you've tested your setup against. That way you will avoid unwanted changes in the role behaviour.

More Information

Find, reuse, and share the best Ansible content on Ansible Galaxy.

Learn more about roles on Ansible Docs.



Unveiling Microsoft Server 2019 – Here's Everything You Should Know About It
// StarWind Blog


We live in an age where our technology needs are ever evolving. We may find we need things we didn't even know existed until a few weeks ago, because every little feature or upgrade brings real value to our everyday lives. The same is the case with the preview release of Windows Server 2019. Let's take a closer peek at what features Microsoft has in store to serve our evolving needs with its new release.

Microsoft is all set to launch its new and improved Windows Server 2019 in the second half of 2018, and it claims to be a super-enhanced version of Windows Server 2016, which remains the most widely used release of the Windows Server generation. The first build of the preview version of the new operating system has already been released. The new server comes with the promise of many new features, but the desktop-like experience, hyper-converged infrastructure, hybrid cloud, the application platform and security remain the priority areas for this newer Windows, and these are where the most notable enhancements have been made.

The Desktop Experience

The desktop-like Windows Server experience is probably the most exciting thing about this release, as it was missing from the earlier Semi-Annual Channel versions. The previous Semi-Annual Channel releases were GUI-less and only supported Server Core and Nano configurations. Thanks to the LTSC release by Microsoft, tech gurus and IT buffs will be able to use a desktop GUI for Windows Server 2019 hand-in-hand with the GUI-less Server Core and Nano configurations.

Hybrid Cloud Goal and Project Honolulu

Hybrid clouds are the buzz now, so Microsoft has kept the tradition alive and made them a priority functionality area, aiming for Hyper-V virtual machines to connect and live migrate between Azure and on-premises Hyper-V hosts. One of the predictions going around with regard to the new Windows Server 2019 is that Microsoft will do away with Server Manager in favour of a more efficient web-based server management tool - Project Honolulu. Eventually, the tool will provide the necessary support to manage both on-premises and cloud resources via a single interface. This product evolution is all too familiar to us, because we have previously seen PowerShell metamorphose from a basic support tool into a unified management tool.

Enhanced Security

After huge investment in the security features of Windows Server 2016, Microsoft has gone much further in the rollout of the new Windows Server 2019. Not only has Microsoft made security enhancements like Shielded VM support for Linux VMs, it has also added a built-in tool, Windows Defender Advanced Threat Protection (ATP), for detecting and assessing security breaches and potential malicious attacks, and responding to them by automatically alerting users and blocking threats. The inclusion of Windows Defender ATP in Windows Server 2019 will help users prevent security compromises using deep kernel and memory sensors, data storage, security integrity components and network transport.

Platform Management and Linux Interoperability

When it comes to the application platform, Windows Server 2019 brings in some remarkable upgrades to make it efficient and hassle-free. The base container image has been reduced to a third of its size, decreasing the image's download time by a whopping 72 percent. There has been plenty of speculation about whether or not the new server will support Kubernetes; like most other orchestrators, Kubernetes will be supported by the new server and will go through further upgrades in the coming quarters as well. Another key area for the application platform is improving the overall user journey in terms of engagement and navigation. The navigation experience will be improved by allowing Linux users to use industry standards like tar and curl to bring Linux scripts into Windows.

Enterprise-Grade HCI

HCI (hyper-converged infrastructure) combines storage, compute and networking in a software-driven appliance, and users of x86 servers now realize what HCI is really worth. It has indeed become the most in-demand trend in today's server industry. The HCI market has shown astonishing growth of 64% according to IDC, and analysts including Gartner have said that by 2019 it will become a $5 billion market. The integration of Windows Server 2019 with Project Honolulu makes HCI deployment management more scalable, reliable and efficient, consequently making the management of everyday activity in HCI environments much easier and simpler.

Wrapping It Up

Windows Server 2019 is all set to be launched for the general public soon, but if you are a potential user or an IT buff who can't wait to get their hands on it, join the Windows Insider program for preview and tool testing. It will give you a fair idea of what to expect from the upcoming launch.

Microsoft would love to hear from you! Use a Windows 10 Insider device and the Feedback Hub application to share your views on the preview. When you open the app, select the Server category and the relevant subcategory, then enter your build number to provide your feedback. You can also visit the Windows Server space in the Tech Community to share your views and learn from the best minds.





Advanced Exploitation: How to Find & Write a Buffer Overflow Exploit for a Network Service
// Null Byte « WonderHowTo

While our time with the Protostar VM from Exploit Exercises was lovely, we must move on to bigger things and harder challenges. Exploit Exercises' Fusion VM offers some more challenging binary exploitation levels for us to tackle. The biggest change is that these levels are all network services, which means we'll write our first remote exploits. In this guide on advanced exploitation techniques in our Exploit Development series, we'll take a look at the first level in the GNU debugger (GDB), write an exploit, and prepare for bigger challenges. Performing some code analysis will be the...



CloudStack & Ceph day, Thursday, April 19 – roundup
// CloudStack Consultancy & CloudStack...

On Thursday, April 19 the CloudStack community joined up with the Ceph community for a combined event in London, and what an event it was! The meetup took place at the Early Excellence Centre at Canada Water, on a beautiful, sunny day (in fact the hottest day of the year so far), and registration started early with people enjoying coffee and pastries by the canal.

CloudStack & Ceph combined morning sessions

Once everyone had enjoyed breakfast, we all settled down as Wido den Hollander (of 42on) took to the podium to welcome everyone, explain a bit about how the event had come about, and talk through both technologies (CloudStack and Ceph), including how well they work together. Having set the scene, Wido was ready to deliver the first presentation of the day, which was prepared to appeal to both communities – 'Building a highly available cloud with Ceph and CloudStack'. In this talk, Wido talked about how he came across and started using Ceph, and how his company utilises both Ceph and CloudStack to offer services. To quote Wido: "We manage tens of thousands of Virtual Machines running with Ceph and CloudStack and we think it is a winning and golden combination"! This first talk really got the event off to a flying start, with plenty of questions and interaction from the floor, and Wido's slides can be found here:

Next up was John Spray (Red Hat) with a talk entitled 'Ceph in Kubernetes'. John talked through Kubernetes, Rook, and why and how it all works together. Lots more detail in John's slides:

After a short break, it was the turn of Phil Straw (SoftIron) to present. Phil's talk was 'Ceph on ARM64', and covered all the fundamentals, including some real-world results from SoftIron's appliance. Take a look through Phil's slides:

Sebastien Bretschneider (intelligence) was last up before lunch, with 'Our way to Ceph' – another great talk for both communities as he talked about his experience using Ceph with CloudStack. Starting with some background, Sebastien talked through lessons learnt and how the platform looks now. More information in Sebastien's slides:

CloudStack afternoon sessions

After lunch, the day split into two separate tracks, the first dedicated to CloudStack (the CloudStack European User Group – CSEUG), the second to Ceph. As with the morning, both rooms were very busy, with full agendas. As I focussed on CloudStack, I cover the CloudStack talks here (at the end of the article I include links to the Ceph presentations).

Giles Sirett (CSEUG chairman) brought the room to order and started with introductions and CloudStack news, of which (as always) there is lots, including new releases - 4.9.3, 4.10, and the latest LTS release, 4.11. Sticking with 4.11, Giles touched on some of the new features that have been introduced, such as the new host HA framework, the new CA framework and Prometheus integration (much more detail in Paul's later presentation). Giles also mentioned that we are moving towards zero-downtime upgrades - a highly anticipated and much-needed enhancement! What is exciting about this community is that nearly all of the 4.11 features were contributed or instigated by USERS... and speaking of new features - this was someone's response to Citrix changing the XenServer licencing model:

Giles then advised us of some dates for our diaries - lots of upcoming CloudStack events - and shared how CloudStack is growing in awareness and use: we are seeing 800 downloads per month just from our (ShapeBlue) repo, and have several new, large-scale private CloudStack clouds being deployed this year. Please see Giles' slides for much more information:

Once Giles had finished taking questions, he introduced the first presentation: both Wido den Hollander and Mike Tutkowski (SolidFire), for a brief retrospective of Wido's year as VP of the CloudStack project and a welcome for Mike as the new VP! This was a quick introduction, and next up was Antoine Coetsier (Exoscale) with his talk entitled 'Billing the cloud'.

Starting with Exoscale's background, Antoine talked through pros and cons of different approaches, and explained Exoscale's solution. Antoine's slides are here:

As mentioned earlier, next up was Paul Angus (ShapeBlue) with 'What's new in ACS 4.11', which includes hundreds of updates, over 30 new features and the best automated test coverage yet. Paul went through user features, operator features and integrations. This demonstrated just how much work and development is going into CloudStack, and if you are interested to see just how much, I urge you to read through Paul's comprehensive slides:

After a short break, Giles introduced the next talk of the day – Boyan Krosnov (Storpool), 'Building a software-defined cloud'. After a little background, Boyan started by explaining how a hyper-converged cloud infrastructure looks, and then talked about how Storpool works with CloudStack.

Ivan's talk was filmed, and his slides can be found here:

Next up was Mike Tutkowski (SolidFire), who talked us through managed storage in CloudStack, and then presented a live demo so we could see first-hand how it works. As Mike works for SolidFire this was the storage he used for his demo!

The end of this last talk of the CloudStack track coincided with the end of the last Ceph talk next door, so we all congregated together to hear some last thoughts and acknowledgements from Wido. This was the first collaborative day between Ceph and CloudStack, and it was a resounding success, thanks in no small part to the vibrant communities that support these technologies. Bringing the event to a close, we all moved the conversations to a nearby pub, where the event continued late into a beautiful London evening.

Thanks to our sponsors, without whom these great events wouldn't be possible - 42on, Red Hat and ShapeBlue (us!) for providing the venue and refreshments, and SoftIron for providing the evening drinks.

Ceph afternoon sessions

Wido den Hollander – 10 ways to break your Ceph cluster

Nick Fisk – low latency Ceph

The post CloudStack & Ceph day, Thursday, April 19 – roundup appeared first on The CloudStack Company.



ChefConf 2018: Announcing Main Stage Speakers
// Chef Blog

It's just under a month until ChefConf 2018 kicks off in Chicago! We've been busy putting the finishing touches on the program for the event, May 22-25 at the Hyatt Regency Chicago. Today I'm excited to share more of our program with you, and some of the partners and customers who have agreed to spend the week with us sharing their stories of digital transformation.

Keynote Speakers

Both Wednesday and Thursday of the conference feature main stage "keynote" presentations which provide context and insight into the new features and scenarios supported by the Chef family of tools. Speakers on this year's main stage include:

Chef Customers

Chef users including the Gates Foundation, Toyota Financial Services, CSG, and more will take you through their IT and application delivery projects, and the impact of automation in delivering delight to their customers at cloud speed.

Chef Partners

Partners such as Microsoft, AWS, and more will show you how Chef Automate works with their platforms and clouds to help organizations deliver apps faster, and with greater insight, across both on-prem and cloud based estates.

Chef Leadership

Chef's own Adam Jacob, Barry Crist, Corey Scobie, Nathen Harvey and more will run through the latest innovations in both application and infrastructure modernization, with technical insights and live demos of the newest features and functionality in Chef Automate and Habitat.

View Agenda and Register

Stay tuned for more exciting speaker announcements as we get closer to the event! The full program can be viewed online. Check out the agenda, learn about all of the events, including breakout technical sessions and full-day pre-conference workshops, and register for the event here.

The post ChefConf 2018: Announcing Main Stage Speakers appeared first on Chef Blog.



Wednesday, April 25, 2018

Connecting to a Windows Host
// Ansible Blog

Welcome to the first installment of our Windows-specific Getting Started series!

Would you like to automate some of your Windows hosts with Red Hat Ansible Tower, but don't know how to set everything up? Are you worried that Red Hat Ansible Engine won't be able to communicate with your Windows servers without installing a bunch of extra software? Do you want to easily automate everyone's best friend, Clippy?



We can't help with the last thing, but if you said yes to the other two questions, you've come to the right place. In this post, we'll walk you through all the steps you need to take in order to set up and connect to your Windows hosts with Ansible Engine.

Why Automate Windows Hosts?

A few of the many things you can do for your Windows hosts with Ansible Engine include:

  • Starting, stopping and managing services
  • Pushing and executing custom PowerShell scripts
  • Managing packages with the Chocolatey package manager

In addition to connecting to and automating Windows hosts using local or domain users, you'll also be able to use runas to execute actions as the Administrator (the Windows alternative to Linux's sudo or su), so no privilege escalation ability is lost.

What's Required?

Before we start, let's go over the basic requirements. First, your control machine (where Ansible Engine will be executing your chosen Windows modules from) needs to run Linux. Second, Windows support has been evolving rapidly, so make sure to use the newest possible version of Ansible Engine to get the latest features!

For the target hosts, you should be running at least Windows 7 SP1 or later or Windows Server 2008 SP1 or later. You don't want to be running something from the 90's like Windows NT, because this might happen:



Lastly, since Ansible connects to Windows machines and runs PowerShell scripts by using Windows Remote Management (WinRM) (as an alternative to SSH for Linux/Unix machines), a WinRM listener should be created and activated. The good news is, connecting to your Windows hosts can be done very easily and quickly using a script, which we'll discuss in the section below.

Step 1: Setting up WinRM

What's WinRM? It's a feature of Windows Vista and higher that lets administrators run management scripts remotely; it handles those connections by implementing the WS-Management Protocol, based on Simple Object Access Protocol (commonly referred to as SOAP). With WinRM, you can do cool stuff like access, edit and update data from local and remote computers as a network administrator.

The reason WinRM is perfect for using with Ansible Engine is because you can obtain hardware data from WS-Management protocol implementations running on non-Windows operating systems (in this specific case, Linux). It's basically like a translator that allows different types of operating systems to work together.

So, how do we connect?

With most versions of Windows, WinRM ships in the box but isn't turned on by default. There's a Configure Remoting for Ansible script you can run on the remote Windows machine (in a PowerShell console as an Admin) to turn on WinRM. To set up an https listener, build a self-signed cert and execute PowerShell commands, just run the script on the remote machine (assuming you've got the .ps1 file stored locally on it).
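For instance, assuming the script has been saved locally under its upstream name ConfigureRemotingForAnsible.ps1 (the filename used by the Ansible project's example scripts), the invocation looks like this:

```powershell
# Run from a PowerShell console opened as Administrator on the Windows host.
powershell.exe -ExecutionPolicy ByPass -File ConfigureRemotingForAnsible.ps1
```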
Note: The win_psexec module will help you enable WinRM on multiple machines if you have lots of Windows hosts to set up in your environment.

For more information on WinRM and Ansible, check out the Windows Remote Management documentation page.

Step 2: Install Pywinrm

Since the pywinrm dependencies aren't shipped with Ansible Engine (and they are necessary for using WinRM), make sure you install the pywinrm library on the machine that Ansible is installed on. The simplest method is to run pip install pywinrm in your terminal.

Step 3: Set Up Your Inventory File Correctly

In order to connect to your Windows hosts properly, make sure you set ansible_connection=winrm in the host vars section of your inventory file; otherwise, Ansible Engine will keep trying to connect to your Windows host via SSH.

Also, the WinRM connection plugin defaults to communicating over HTTPS, but it supports different modes like message-encrypted HTTP. Since the "Configure Remoting for Ansible" script we ran earlier set things up with a self-signed cert, we need to tell Python, "Don't try to validate this certificate because it's not going to be from a valid CA." So in order to prevent an error, one more thing you need to put into the host vars section is: ansible_winrm_server_cert_validation=ignore

Just so you can see it in one place, here is an example host file (please note, some details for your particular environment will be different):

[win]

[win:vars]
ansible_user=vagrant
ansible_password=password
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

Step 4: Test Connection

Let's check to see if everything is working. To do this, go to your control node's terminal and type ansible [host_group_name_in_inventory_file] -i hosts -m win_ping. Your output should look like this:
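The screenshot of the expected output is missing here; for a working setup, a successful win_ping run prints something along these lines for each host (the host alias is illustrative):

```
winhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```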


Note: The win_ prefix on all of the Windows modules indicates that they are implemented in PowerShell and not Python.

Troubleshooting WinRM

Because WinRM can be configured in so many different ways, errors that seem Ansible Engine-related can actually be due to problems with host setup instead. Some examples of WinRM errors that you might see include an HTTP 401 or HTTP 500 error, timeout issues or a connection refusal. To get tips on how to solve these problems, visit the Common WinRM Issues section of our Windows Setup documentation page.


You should now be ready to automate your Windows hosts using Ansible, without the need to install a ton of additional software! Keep in mind, however, that even if you've followed the instructions above, some Windows modules have additional specifications (e.g., a newer OS or more recent PowerShell version). The best way to figure out if you're meeting the right requirements is to check the module-specific documentation pages.

For more in-depth information on how to use Ansible Engine to automate your Windows hosts, check out our Windows FAQ and Windows Support documentation page and stay tuned for more Windows-related blog posts!


Read in my feedly

Sent from my iPhone

Practical Guide to Cloud Migration with Chef

Practical Guide to Cloud Migration with Chef
// Chef Blog

Organizations are migrating workloads to the cloud at breakneck speed, and it's not difficult to see why. Cloud providers offer the ability to provision new environments and infrastructure on demand, allowing fleets to be scaled with unparalleled speed and efficiency. Network and storage hardware that is costly and complex to maintain can be replaced with vendor-managed services. The benefits of a cloud migration are numerous and profound, but migrating successfully takes careful planning.

Our latest white paper, Practical Guide to Cloud Migration with Chef, focuses first on articulating the challenges organizations face when planning a cloud deployment. Though cloud vendors provide a variety of tools to address day-to-day operational needs, understanding where their responsibilities end and yours begin can be difficult at first glance. Even once you're familiar with your responsibilities in the cloud, managing newly created cloud environments alongside the traditional data centers your teams already maintain requires consistency in tooling and workflows to ensure work isn't duplicated between these environments. Finally, as you continue to grow, you may look to deploy content into multiple clouds, so being able to adapt existing processes to multiple vendors becomes crucial.

Thankfully, Chef helps address all of these challenges! The remainder of the paper provides examples of how Chef's abstractions and integrations can help you understand and verify your responsibilities in the cloud, maintain hybrid environments under a single configuration codebase, and apply audits and configurations consistently across cloud environments with a single workflow. It also touches on how Chef Automate can be used to maintain visibility into how your environments are changing, whether in a single environment, or across a multitude of platforms and cloud vendors.

Download the Whitepaper Practical Guide to Cloud Migration with Chef




Tuesday, April 24, 2018

How to Automate Brute-Force Attacks for Nmap Scans

How to Automate Brute-Force Attacks for Nmap Scans
// Null Byte « WonderHowTo

Using Hydra, Ncrack, and other brute-forcing tools to crack passwords for the first time can be frustrating and confusing. To ease into the process, let's discuss automating and optimizing brute-force attacks for potentially vulnerable services such as SMTP, SSH, IMAP, and FTP discovered by Nmap, a popular network scanning utility. BruteSpray, developed by Jacob Robles and Shane Young, is a Python script capable of processing an Nmap scan output and automating brute-force attacks against discovered services using Medusa, a popular brute-forcing tool. BruteSpray is the much-needed nexus that... more



How to Build a Beginner Hacking Kit with the Raspberry Pi 3 Model B+

How to Build a Beginner Hacking Kit with the Raspberry Pi 3 Model B+
// Null Byte « WonderHowTo

If you want to follow Null Byte tutorials and try out Kali Linux, the Raspberry Pi is a perfect way to start. In 2018, the Raspberry Pi 3 Model B+ was released featuring a better CPU, Wi-Fi, Bluetooth, and Ethernet built in. Our recommended Kali Pi kit for beginners learning ethical hacking on a budget runs the "Re4son" Kali kernel and includes a compatible wireless network adapter and a USB Rubber Ducky. You should be using a system separate from your day-to-day computer for testing out hacking tools and downloading hacking software, and a Raspberry Pi brings the price of keeping your... more



How to Create Your Own Search Engine for More Privacy & Zero Trust Issues

How to Create Your Own Search Engine for More Privacy & Zero Trust Issues
// Null Byte « WonderHowTo

While there are a variety of privacy-focused search engines available like StartPage and DuckDuckGo, nothing can offer the complete trust offered by creating one's own search engine. For complete trust and security, Searx can be used as free metasearch engine which can be hosted locally and index results from over 70 different search engines. Search engines inevitably carry some traces of metadata about anyone who uses them, even if just temporarily. If you don't want to trust this data to a third-party search engine, the only solution is to host your own. One could choose to host this on an... more



HashiCorp Vault 0.10

HashiCorp Vault 0.10
// Hashicorp Blog

We are excited to announce the release of HashiCorp Vault 0.10. Vault is an identity-based security product that provides secrets management, encryption as a service, and identity and access management, leveraging any trusted source of identity to enforce access to systems, secrets, and applications.

The 0.10 release of Vault delivers new features to help with automating secrets management and enhancing Vault's ability to operate natively in multi-cloud environments. In addition to this new functionality, Vault 0.10 open sources the Web UI.

New features in 0.10 include:

  • K/V Secrets Engine v2 with Secret Versioning: Vault's Key-Value Secrets Engine now supports additional features, including secret versioning and check-and-set operations.
  • Open Source Vault Web UI: The previously Enterprise-only UI has been made open source and is now released in all versions of Vault along with many enhancements.
  • Root DB Credential Rotation: Root/admin credentials for databases controlled by Vault's Combined DB secrets engine can now be securely rotated with only Vault knowing the new credentials.
  • Azure Auth Method: Azure Machines can now log into Vault via their Azure Active Directory credentials.
  • GCP Secrets Engine: Vault can now create dynamic IAM credentials for accessing Google Cloud Platform environments.

The release also includes additional new features, secure workflow enhancements, general improvements, and bug fixes. The Vault 0.10 changelog provides a full list of features, enhancements, and bug fixes.

As always, we send a big thank-you to our community for their ideas, bug reports, and pull requests.

K/V Secrets Engine v2

Vault 0.10 includes a revamped Key/Value Secrets Engine that allows secrets to be versioned and updated with check-and-set operations. Multiple versions of a single value can be read and written within Vault.

These features are available via the API as well as the new vault kv subcommand in the CLI. Because the API differs between versions 1 and 2 of the K/V Secrets Engine, new K/V mounts created in 0.10 will not support versioning unless it is explicitly enabled, for backwards compatibility. Existing K/V mounts can be upgraded to enable versioning; however, downgrading a versioned mount back to a non-versioned K/V is not supported. The new vault kv subcommand transparently handles the API differences, making it easy to use both versions of the secrets engine.

To mount a new, versioned instance of the K/V Secrets engine, issue the following CLI command:

$ vault secrets enable kv-v2

Existing non-versioned K/V instances can be upgraded from the CLI with enable-versioning:

```
$ vault kv enable-versioning secret/
Success! Tuned the secrets engine at: secret/
```

Writing data to a versioned K/V mount via the vault kv subcommand is similar to writing secrets via vault write:

```
$ vault kv put secret/my-secret my-value=shh
Key              Value
---              -----
created_time     2018-03-30T22:11:48.589157362Z
deletion_time    n/a
destroyed        false
version          1
```

The new versioned K/V mounts (and the vault kv subcommand) support writing multiple versions. Writes can optionally be guarded with the check-and-set (-cas) flag. If -cas=0, the write is only performed if the key doesn't exist; if -cas is non-zero, the write is only allowed if the key's current version matches the version specified in the cas parameter.

```
$ vault kv put -cas=1 secret/my-secret my-value=itsasecret
Key              Value
---              -----
created_time     2018-03-30T22:18:37.124228658Z
deletion_time    n/a
destroyed        false
version          2
```

When the secret is now read via vault kv get, the current version is shown:

```
$ vault kv get secret/my-secret
====== Metadata ======
Key              Value
---              -----
created_time     2018-03-30T22:18:37.124228658Z
deletion_time    n/a
destroyed        false
version          2

====== Data ======
Key         Value
---         -----
my-value    itsasecret
```

However, previous versions of the secret are still accessible via the -version flag:

```
$ vault kv get -version=1 secret/my-secret
====== Metadata ======
Key              Value
---              -----
created_time     2018-03-30T22:16:39.808909557Z
deletion_time    n/a
destroyed        false
version          1

====== Data ======
Key         Value
---         -----
my-value    shh
```

There are two ways to delete versioned data with vault kv: vault kv delete and vault kv destroy.

vault kv delete performs a soft deletion that marks a version as deleted and records a deletion_time timestamp. Data removed with vault kv delete can be restored with vault kv undelete.

For example, the latest version of a secret can be soft-deleted by simply running vault kv delete. A specific version can be deleted using the -versions flag.

```
$ vault kv delete secret/my-secret
Success! Data deleted (if it existed) at: secret/my-secret
```

A version soft-deleted using vault kv delete can be restored with vault kv undelete:

```
$ vault kv undelete -versions=2 secret/my-secret
Success! Data written to: secret/undelete/my-secret

$ vault kv get secret/my-secret
====== Metadata ======
Key              Value
---              -----
created_time     2018-03-30T22:18:37.124228658Z
deletion_time    n/a
destroyed        false
version          2

====== Data ======
Key         Value
---         -----
my-value    itsasecret
```

However, data removed by vault kv destroy cannot be restored.

```
$ vault kv destroy -versions=2 secret/my-secret
Success! Data written to: secret/destroy/my-secret
```

The new K/V Secrets Engine supports a number of additional features, including metadata and new ACL policy rules. For more information on the K/V Secrets Engine, see the documentation.
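The versioning, check-and-set, and delete/destroy semantics described above can be summarized with a small in-memory model. This is an illustration of the behavior only, not Vault's implementation:

```python
# Toy model of K/V v2 semantics: versioned writes with check-and-set,
# soft delete/undelete, and permanent destroy.

class VersionedKV:
    def __init__(self):
        # path -> list of version entries, oldest first
        self.versions = {}

    def put(self, path, value, cas=None):
        history = self.versions.setdefault(path, [])
        current = len(history)  # 0 if the key does not exist yet
        # cas=0 succeeds only if the key doesn't exist; a non-zero cas
        # must match the key's current version.
        if cas is not None and cas != current:
            raise ValueError(f"check-and-set failed: current version is {current}")
        history.append({"value": value, "deleted": False, "destroyed": False})
        return current + 1  # the new version number

    def get(self, path, version=None):
        history = self.versions[path]
        entry = history[-1] if version is None else history[version - 1]
        if entry["deleted"] or entry["destroyed"]:
            return None
        return entry["value"]

    def delete(self, path, version=None):
        # Soft delete: the value is hidden but recoverable via undelete.
        entry = self.versions[path][-1 if version is None else version - 1]
        entry["deleted"] = True

    def undelete(self, path, version):
        entry = self.versions[path][version - 1]
        if not entry["destroyed"]:
            entry["deleted"] = False

    def destroy(self, path, version):
        # Permanent: the value is dropped and cannot be undeleted.
        entry = self.versions[path][version - 1]
        entry.update(value=None, destroyed=True)

kv = VersionedKV()
kv.put("secret/my-secret", "shh", cas=0)         # creates version 1
kv.put("secret/my-secret", "itsasecret", cas=1)  # creates version 2
kv.delete("secret/my-secret")                    # soft-deletes version 2
kv.undelete("secret/my-secret", 2)               # version 2 readable again
```

The key property to notice is that delete and destroy are per-version operations: soft-deleting the latest version leaves older versions intact, and only destroy is irreversible.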

Open Source Vault UI

With Vault 0.10 we have open sourced the UI. All versions of Vault now include the UI, which enables core secrets management, encryption as a service, and identity and access management to be performed natively within Vault's UI, along with core system configuration tasks for managing the deployment of a Vault environment.


The UI has also been updated to include new extensions for managing Auth Methods and policies, allowing users who are more comfortable configuring Vault within a GUI to be able to manage RBAC for secrets and the configuration of their system.

For more, see the Introduction to Vault OSS UI tutorial.

Root DB Credential Rotation

Root/admin credentials for databases that are configured via the Combined DB Secrets Engine now support rotation. This functionality allows for credentials given to Vault to immediately be rotated upon Vault's configuration with a system, ensuring that Vault is the only system of record for configured databases' admin or root credentials. These can also be rotated on-demand at any time.

Root Credential Rotation is designed to minimize "secret sprawl" of admin credentials, minimizing the probability of this very sensitive data being used to gain undue access to a database. When combined with automation and scripting, credential rotation is intended to allow Vault to operate in highly secure environments where bootstrapping is automated and systems other than Vault may be exposed to root credentials.

Root Credential Rotation is complementary to Dynamic Secrets. Dynamic Secrets enable Vault to create ephemeral credentials for clients on demand, but Vault itself needs a long-lived credential for persistent access to the underlying system. Automating the rotation of this persistent credential improves the overall security posture of Vault.

Azure Auth Method

Vault 0.10 now supports integration with Azure Active Directory (AAD), allowing Vault to authenticate users with AAD machine credentials.

The Azure Auth Method is intended to allow applications hosted natively within Azure to streamline their authentication with Vault, simplifying the workflow for logging into a Vault environment and accessing data that is authorized to that application.

While the Vault 0.10 release focuses on Azure machine credentials, we are actively working with Microsoft to support other Azure Active Directory credential types for future versions of Vault.

For more information on the Azure Active Directory Auth Method, see Azure's blog Scaling Secrets in Azure: HashiCorp Vault speaks Azure Active Directory.

GCP Secrets Engine

The Vault team has been collaborating with Google on the development of secrets engines and auth methods to integrate with Google Cloud Platform. Vault 0.10 sees the continuation of our partnership with the release of the GCP Secrets Engine, which allows Vault users to create dynamic GCP credentials for accessing Google Cloud Platform infrastructure. This allows short-lived credentials to be given to applications that need to access GCP using the Dynamic Secrets pattern.

For more information on the GCP Secrets Engine, see Google's blog How to dynamically generate GCP IAM credentials with a new HashiCorp Vault secrets engine.

Other Features

There are many new features in Vault 0.10 that have been developed over the course of the 0.9.x releases. We have summarized a few of the larger features below, and as always consult the Changelog for full details.

  • Selective HMAC: Data sent to the audit log from backends within Vault can be configured to selectively HMAC values. This makes it easier for SIEMs to audit user activity across a network and application infrastructure, as well as for response teams to conduct chronological analysis of a security event.
  • Security Enhancements for Return Data: Fields which contain secret data in the API no longer return this data on read, minimizing the exposure of data stored within Vault.
  • New CLI and API Updates: The Vault CLI and API were updated to streamline common use cases within Vault and the configuration of Vault systems.
  • gRPC support for Backend Plugins: Plugins written for Storage Backends, Auth Methods, and Secret Engines now support gRPC, allowing them to be written in languages other than Go.
  • GCP Cloud Spanner Storage Backend: GCP Cloud Spanner is now a supported Storage Backend plugin.
  • ChaCha20-Poly1305 Support: The Transit Secret Engine now fully supports the ChaCha20-Poly1305 cryptography scheme, allowing for encryption, decryption, convergent encryption, and key derivation.
  • Replication Enhancements: Replication has been enhanced over the course of the 0.9.x timeframe. These enhancements include: the ability to encrypt the data contained in replication activation tokens as further defense in depth, changes to sys/health for DR secondaries, and enhancements to the behavior of seal-wrap with replication to wrap data stored on secondaries.
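The Selective HMAC feature above can be illustrated with Python's standard hmac module: non-sensitive audit fields stay readable, while designated values are replaced by keyed digests, so repeated use of the same secret remains correlatable without ever exposing it. The field names and key below are invented for illustration and are not Vault's audit format:

```python
import hmac
import hashlib

def audit_entry(event, hmac_key, sensitive_fields):
    """Return a copy of the event with sensitive values replaced by HMACs."""
    redacted = {}
    for field, value in event.items():
        if field in sensitive_fields:
            digest = hmac.new(hmac_key, value.encode(), hashlib.sha256).hexdigest()
            redacted[field] = f"hmac-sha256:{digest}"
        else:
            redacted[field] = value
    return redacted

entry = audit_entry(
    {"path": "secret/my-secret", "value": "itsasecret"},
    hmac_key=b"audit-key",
    sensitive_fields={"value"},
)
# Because the HMAC is deterministic for a given key, a SIEM can still
# correlate repeated use of the same secret across log entries.
```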

Upgrade Details

Vault 0.10 introduces significant new functionality. As such, we provide both general upgrade instructions and a Vault 0.10-specific upgrade page.

As always, we recommend upgrading and testing this release in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault mailing list.

Users can download the open source version of Vault from the Vault website. For more information about HashiCorp Vault Enterprise, or to start a free, 30-day trial, visit the Vault Enterprise page.

We hope you enjoy Vault 0.10!



Consul 1.0.7

Consul 1.0.7
// Hashicorp Blog

We recently released HashiCorp Consul 1.0.7 which includes a number of features and improvements.

Download Now

Here are some highlights from Consul 1.0.7:

  • Service metadata - You can now set key/value data during service registration that will be accessible during the full lifecycle of the service. Service metadata can be used to specify information such as the version of the service, weights or priorities for external load balancing, or other static information that may have previously been stored in the K/V datastore that should be attached to the service directly.
  • GZIP compression in HTTP responses - All HTTP API endpoints now support compression when requested by the client, which can improve performance when dealing with large API responses. Depending on your client (a web browser, for example), this may be requested by default with the Accept-Encoding: gzip header. Additionally, HTTP endpoints now respond to OPTIONS requests.
  • near=_ip support for prepared queries - Prepared queries now support a special ?near=_ip lookup to sort results by proximity to an IP similar to how ?near=_agent sorts by proximity to the local agent (using network coordinates). This can be used in conjunction with Consul ESM for looking up results near an external service.
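The ?near= sorting above relies on Consul's network coordinates, which estimate round-trip distance between nodes; results are then ordered by distance from the reference coordinate. A minimal sketch of that ordering idea, with invented node names and two-dimensional coordinates:

```python
import math

def sort_by_proximity(origin, nodes):
    """Order nodes by Euclidean distance from the origin coordinate,
    mimicking how ?near= sorts results using network coordinates."""
    def distance(coord):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(origin, coord)))
    return sorted(nodes, key=lambda n: distance(n["coord"]))

nodes = [
    {"name": "web-1", "coord": (0.9, 0.1)},
    {"name": "web-2", "coord": (0.2, 0.3)},
    {"name": "web-3", "coord": (0.5, 0.5)},
]
nearest = sort_by_proximity((0.0, 0.0), nodes)
# nearest[0] is the node with the smallest estimated distance
```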

In addition to these new features, there are a handful of smaller bug fixes and improvements. Please review the v1.0.7 changelog for a detailed list of changes. Due to a change made for blocking queries on service-specific health and catalog endpoints, clients relying on undocumented behavior of X-Consul-Index in blocking queries may be affected. Read the 1.0.7 upgrade guide for more information.

As always, please test in an isolated environment before upgrading. Consul 1.0.7 is also available in Consul Enterprise Pro and Consul Enterprise Premium.

A special thank you to our active community members who have been invaluable in adding new features, reporting bugs, and improving the documentation for Consul.



HashiCorp Vagrant 2.0.4

HashiCorp Vagrant 2.0.4
// Hashicorp Blog

We are pleased to announce the release of Vagrant 2.0.4. Vagrant is a tool for building and distributing development environments. The highlight of this release is a new alias feature for the Vagrant CLI.

Download Now

This release of Vagrant includes a handful of bug fixes and improvements. Notable updates in this release include:
  • Support preference of system binaries on all platforms
  • Fix for unexpected timeouts during PowerShell detection
  • Patched cURL to fix adding local boxes
  • Updated Ruby version

Vagrant Aliases

This release also includes a new feature contributed by Zachary Flower (@zachflower). The alias feature allows users to map commands to shorter or easier-to-remember alternatives. It also enables users to create new, custom Vagrant commands by allowing execution of external commands. For more information about the alias feature, including an in-depth explanation of the features available and examples, please refer to the documentation page.
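As a rough illustration of the syntax (the entries below are invented; see the documentation for the authoritative format), aliases are defined one per line in a plain-text aliases file under the Vagrant home directory:

```
# ~/.vagrant.d/aliases
# alias a built-in Vagrant command
gs = global-status

# a leading "!" runs an external, non-Vagrant command
vf = !cat Vagrantfile
```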

Other Improvements

The Vagrant 2.0.4 release also includes a number of other improvements and bug fixes. For a detailed list of all the changes in this release, see the CHANGELOG.



Facebook Cookbooks

Facebook Cookbooks
// Food Fight

Join Phil Dibowitz (@thephild) for a discussion of how Facebook uses Chef and to learn more about the Cookbooks they have published to GitHub and the Supermarket.



Show Notes

Post-show updates






The Food Fight Show is brought to you by Nathen Harvey and Nell Shamrell with help from other hosts and the awesome community of Chefs.

The show is sponsored, in part, by Chef.

Feedback, suggestions, and questions are always welcome.



Conference, Interrupted

Conference, Interrupted
// Food Fight

Winter storm Toby hit the Northeast US, including Baltimore, on Wednesday, March 21, 2018. DevOpsDays Baltimore was originally scheduled for Wednesday, March 21 and Thursday, March 22. The conference was moved to Thursday, March 22 and Friday, March 23. This move impacted everyone associated with the conference: attendees, sponsors, speakers, organizers, and vendors.

Join some of the organizers of DevOpsDays Baltimore 2018 as they host a learning review and share their experience about the impact Winter Storm Toby had on the conference.


Upcoming Chef Events

Show Notes

Winter Storm Toby Impact on DevOpsDays Baltimore 2018

Winter storm Toby hit the Northeast US, including Baltimore, on Wednesday, March 21, 2018. DevOpsDays Baltimore was originally scheduled for Wednesday, March 21 and Thursday, March 22. The conference was moved to Thursday, March 22 and Friday, March 23. This move impacted everyone associated with the conference: attendees, sponsors, speakers, organizers, and vendors.


There are four main constituents for the conference:

  1. Attendees
  2. Speakers
  3. Sponsors
  4. Vendors


All dates and times listed below are in March 2018 and listed in Eastern Daylight Time. Sub-bullets provide some additional commentary made possible with the benefit of hindsight bias.

  • 16 - 11:00 - Baltimore Slack - Did anyone happen to look at the weather next week for tuesday going in to wednesday? It's not supposed to be pretty
    • This is the first time we detected the possible situation.
  • 16 - 11:07 - Baltimore Slack - yeah, it's still too early to know and we'll have to monitor for how it really turns out (4th Nor'easter)- enough with the cold weather
    • Alert fatigue. The previous 3 nor'easters did not really have much of an impact on Baltimore, outside of wind.
  • 16 - 11:08 - Baltimore Slack - Do we have access to coat racks at IMET?
    • Coat racks ;)
  • 16 - 11:08 - Baltimore Slack - that will be a good question for our contact at the IMET - I don't think we had anticipated that considering the date
  • 16 - 11:29 - Baltimore Slack - this may be a crazy hail mary….but lets say the area gets slammed and roads aren't drivable, do we have the option to move from the 21-22, to 22-23rd? lets hope the storm passes by with no issues though :pray:
  • 16 - 12:10 - Baltimore Slack - Nothing is technically scheduled on the 23rd… that's really a worst case scenario I would think if we have to shift the conference 22-23rd. the one thing to note for the 21st is that IMET follows UMBC policy so if the university is delayed opening that would affect IMET as well
    • Turns out, the facility follows the University of Baltimore, not UMBC
  • 16 - 13:34 - Baltimore Slack - ah - good point…hopefully it's a non issues, but we should plan for it. If delayed X hours, do we just push everything back on the schedule X hours?
  • 16 - 13:43 - Baltimore Slack - probably can monitor for any delayed opening announcement (looks like the last delayed opening was back in February)
  • 17 - 10:36 - Baltimore Slack - conversation switches to planning for supplies required for running open spaces.
  • 19 - 9:36 - Baltimore Slack - @channel help? - a sponsor reached out to the sponsor liaison to inquire about what the plans were for inclement weather. Is there a make-up date? Are partial refunds an option?
    • a customer raised the priority of an ongoing incident
  • 19 - 9:37 - Baltimore Slack - Ugh I want to keep ignoring that, esp while it's in the 50s today :sob::sob: I don't think we're going to have to cancel, it's not supposed to get below freezing or start til Weds morning from what I'm seeing. Just may be a mess for people coming in Weds morning. But I guess we have to be prepared…
  • 19 - 9:53 - Baltimore Slack - remind me. was Friday 3/23 available at IMET?
  • 19 - 9:53 - Baltimore Slack - might be a good idea to get some sort of communication out that we're monitoring the weather situation and to mirror that IMET follows UMBC's policy. We'd need to nail that down with our contact there. There's nothing scheduled for 3/23 but when we originally scheduled he said we have flexibility as far as moving things out that day as it's open for load-out
  • 19 - 9:56 - Baltimore Slack - Let's get some comms out; Put out a simple poll from folk to see if pushing to Thurs/Friday is preferred to a total reschedule; we need to work a few things in parallel
    • Managing this incident begins in earnest.
  • 19 - 9:57 - Baltimore Slack - we can gather all of the data points and make a more informed decision….if all of the vendors say we're out of luck, that will help drive our decision
  • 19 - 10:02 - Baltimore Slack - Fwiw we should consider that folks have made travel arrangements and hotel bookings in our decision
  • 19 - 10:03 - Baltimore Slack - how many folks are in that boat?
  • 19 - 10:03 - Baltimore Slack - I know there's a handful of speakers, I can pull that number later. We should also ask Hotel RL if they tell us how many have used the discount code
  • 19 - 10:04 - Baltimore Slack - not sure we have time for later…can you pull now please. We'll need to make a call ASAP lots of moving parts
  • 19 - 10:04 - Baltimore Slack - I'm otr
    • Not all responders have full availability and capability to respond throughout the entire incident.
  • 19 - 10:06 - Baltimore Slack - Let's check with the facility: 1) Is it possible to move to Thurs/Friday (if needed) at no additional costs 2) Can you see if they have 2 days the last week of April as alternate dates?
  • 19 - 10:28 - Email - To sponsors, prefer one day or one month delay?
  • 19 - 10:28 - Baltimore Slack - just sent an email out to sponsors and cc'd the organizers list so we'll all see the responses
    • lots of conversation around gathering additional data, not captured here to keep the timeline concise.
  • 19 - 10:46 - Email - To speakers, prefer one day or one month delay? Ask for response by 11:45am, we are looking to make a decision by 12 noon
  • 19 - 11:09 - Baltimore Slack - there are a total of 8 speakers for 7 talks that we consider "Out of Radius"
  • 19 - 11:11 - Baltimore Slack - IF we push by 1 day…..its best to keep the Thurs schedule a planned but please reach out to those on the Wed schedule and see if they could do Friday
  • 19 - 11:13 - Baltimore Slack - catering is willing to work with us on whatever we end up doing. they've ordered all the product but they'd see what they could return if we wait til april or they could make thurs/fri work. have to make a decision if we're going to push by tuesday evening since they'd start baking that night for day 1 breakfast
    • One vendor covered. New deadline for a decision.
  • 19 - 11:13 - Baltimore Slack - its almost unanimous that speakers want a 1 day push vs 1 month
    • Another constituent covered.
  • 19 - 11:14 - Baltimore Slack - Are the parking vouchers tied to our dates?
  • 19 - 11:21 - Baltimore Slack - Poll organizers. Should we push one day or one month? 9 votes for one day, 0 votes for one month
    • Project fatigue? We're this close, let's ship this thing as soon as we can.
  • 19 - 11:21 - Baltimore Slack - Alright here's the plan for speakers. I'm holding off until an official decision by this group - by Noon per emails - to reach out with next steps. If we decide to keep it this week and offset by a day my plan is to check all speakers' availability and come up with a new schedule for thursday/friday to maximize speakers' availability. It will be essentially a case by case basis.
  • 19 - 11:28 - Baltimore Slack - Speakers - want a 1 day push; Organizers - want a 1 day push; Sponsors - want a 1 month push; Vendors - TBD (waiting on venue, food, AV, etc)
    • Status update. Still have conflicting desires and incomplete picture.
  • 19 - 11:28 - DevOpsDays Organizer Slack - Have any of you had to deal with the possibility of inclement weather impacting your conference? We find ourselves in that state in Baltimore this morning and I would appreciate hearing a bit more about your experience and how you handled it.
  • 19 - 11:31 - Baltimore Slack - should we send out an email to general attendees on if they can make a one day shift? maybe sending out a quick poll. I think if the people are there, sponsors will stay
  • 19 - 11:32 - Baltimore Slack - I'd be opposed to asking every attendee. The more people we ask, the more likely we are to disappoint people that we didn't go with their selection
    • Unclear the best actions to take, maybe even disagreement
  • 19 - 11:32 - DevOpsDays Organizer Slack - NYC had to move once because of a hurricane
  • 19 - 11:33 - Baltimore Slack - $25,000 worth of sponsors want the conference to be pushed to April. $15,000 worth of sponsors want to push it by a day. Have heard from 12 out of 28 sponsors so far
  • 19 - 11:41 - Baltimore Slack - Trying to organize our info so people don't have to read through all the back chat to catch up - Google Doc created
    • Slack is no longer working for the team, new communication channel brought into the mix.
  • 19 - 11:49 - Baltimore Slack - Party Plus is ok if we have to push to Thursday/Friday (need to get confirmation with our AV vendor if that's possible)
    • Another vendor covered.
  • 19 - 11:51 - Baltimore Slack - 88% of registrants have a zip code in this area…not surprisingly: this is a "local" conference.
    • More data gathered
  • 19 - 11:53 - Baltimore Slack - from my perspective… the weather doesn't look that bad right now. if my experience at UMBC is of any value, they literally never close for anything; so it's incredibly unlikely the venue will be straight closed
  • 19 - 11:54 - Baltimore Slack - right- that's my thought as well… possibility of a delayed start at worst; it could be an issue for attendees coming in, especially if they're north/west of the city
  • 19 - 11:57 - Baltimore Slack - erring on the side of safety and making a call well in advance I'm still in favor of delaying one day vs. waiting to see if the university delays or closes.
  • 19 - 11:59 - Baltimore Slack - sounds like the only thing we still need is the AV and Venue confirmation @kchung? assuming those are good….looks like the call is to push 1 day
  • 19 - 12:00 - DevOpsDays Organizer Slack - if attendees' kids are home from school on a snow day, they aren't gonna attend
  • 19 - 12:01 - Baltimore Slack - we should also consider the impact on parents. if school gets canceled for their kids, they're more likely to skip the conference and stay home with the kids….probably more likely than skipping work
  • 19 - 12:02 - Baltimore Slack - our contact might still be out dealing with a family emergency- seeing if there's an alternate contact for IMET as I haven't been able to get a hold of him
    • Still not clear on the venue. Contributing factor: our main point of contact was out of the office
  • 19 - 12:07 - Baltimore Slack - On another but related note, our speaker dinner is currently set for tomorrow night - we should consider shifting that or potentially canceling it. Any thoughts from the group?
  • 19 - 12:09 - Baltimore Slack - a few more sponsor votes have come in: $29,000 worth of them vote for April, $24,000 vote for a 1-day push. So, the gap is closing
  • 19 - 12:47 - Baltimore Slack - AV is likely out for Friday (equipment they have for us is in use on Friday); they're looking into the options they have to have alternate equipment available for us
  • 19 - 12:58 - Baltimore Slack - AV would need to rent additional equipment (so there would be additional charges for us to do Friday) but they can make it happen
    • Another vendor covered.
  • 19 - 13:04 - Baltimore Slack - sounds like we can make this happen this week. Venue can support, and AV can support (at a higher cost - TBD).
  • 19 - 13:18 - Baltimore Slack - Did I miss the venue saying they were good?; I'm literally outside imet right now, I can go in and ask.
  • 19 - 13:26 - Baltimore Slack - Okay, the guard took down my info. She's going to call our contact plus his backups. She can't give me contact info for his backups for privacy reasons. She understands it's urgent and is going to keep trying to reach them
  • 19 - 13:30 - Baltimore Slack - they're all out of office for spring break
    • Most of the people we need to reach at the venue are out on spring break
  • 19 - 13:42 - DevOpsDays Organizer Slack - If you need a lifeline on a speaker, happy to help if my schedule allows
  • 19 - 13:53 - Baltimore Slack - Just FYI we officially have more sponsors voting for just pushing it by a day rather than moving it to April. $34,000 worth of sponsors want to move it a day, $30,000 want to move to April. 18 out of 28 sponsors have replied
    • Sponsors effectively "covered": a majority now favor the one-day push
  • 19 - 14:11 - Baltimore Slack - national weather service has lowered their snowfall totals
  • 19 - 14:15 - Baltimore Slack - VP at umbc said he'll try to get an answer within an hour for us or let us know if he can't track down the right people
  • 19 - 14:22 - Baltimore Slack - Let's officially push the conf 1 day - @channel….that was the last piece of info we needed. Is anything else outstanding that is preventing us from doing this?
  • 19 - 14:42 - Baltimore Slack - OK - the change to 1 day is happening ….lets pull the trigger on all fronts and get the word out!
  • 19 - 14:42 - Baltimore Slack - 100% official pushing it back 1 day
  • 19 - 14:57 - Email to all Attendees - DevOpsDays Baltimore is Being Delayed!
  • 19 - 15:56 - Website - Updated to include new logo and announcement of new dates
  • 19 - 16:26 - DevOpsDays Organizer Slack - For those of you following along, we've decided to push the conference back by one day
  • 19 - 16:31 - Twitter - @devopsdaysbmore tweets the delay -
  • 19 - 16:44 - DevOpsDays Organizer Slack - rt from the global @devopsdays ..
  • 20 - 18:00 - University of Baltimore closes for 21-March
  • 22 - 9:00 - DevOpsDays Baltimore Begins
  • 23 - 9:15 - DevOpsDays Organizer Slack - DevOpsDays Baltimore live stream is on now -; at 11:30 EDT we will be hosting (and streaming) a panel of organizers doing a learning review of how we handled the winter storm and having to move our conference back a day with very short notice.
  • 23 - 11:32 - DevOpsDays Organizer Slack - Great learning review live streaming now
  • 23 - 16:30 - DevOpsDays Baltimore Ends
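Throughout the timeline, sponsor preferences were tallied by sponsorship dollars rather than by head count. A minimal sketch of that kind of weighted tally in Python (the individual amounts below are invented for illustration; only the totals match the final $34,000 vs. $30,000 figures reported in the timeline):

```python
from collections import Counter

# Hypothetical per-sponsor responses: (sponsorship amount in USD, preference).
# Amounts are invented; only the totals match the figures in the timeline.
responses = [
    (15_000, "one_day_push"),
    (19_000, "one_day_push"),
    (10_000, "move_to_april"),
    (20_000, "move_to_april"),
]

# Weight each vote by sponsorship dollars instead of counting heads.
tally = Counter()
for dollars, preference in responses:
    tally[preference] += dollars

print(tally["one_day_push"], tally["move_to_april"])  # 34000 30000
```

Dollar-weighting is a judgment call: it prioritizes the sponsors with the most at stake, at the cost of giving smaller sponsors a quieter voice.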


Time to detect and time to resolve are the typical timings captured when conducting a learning review. In this case, detection and resolution both preceded the actual incident, so both values would be negative.

Other timings that are interesting are based on the Observe, Orient, Decide, Act (OODA) loop.

  • Observe - March 16, 11:00
  • Orient - March 19, 9:57 - started gathering data 2 days, 22 hours, 57 minutes after first observing the event.
  • Decide - March 19, 14:22 - 4 hours, 25 minutes after starting to orient, 3 days, 3 hours, 22 minutes after observing the event.
  • Act - March 19, 14:57 - 35 minutes after deciding, 5 hours after orienting, 3 days, 3 hours, 57 minutes after observing.
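The gaps above can be computed directly from the milestone timestamps. A minimal sketch in Python, assuming local time and that the year is 2018:

```python
from datetime import datetime

# OODA loop milestones from the learning review (local time, 2018 assumed).
milestones = {
    "observe": datetime(2018, 3, 16, 11, 0),
    "orient":  datetime(2018, 3, 19, 9, 57),
    "decide":  datetime(2018, 3, 19, 14, 22),
    "act":     datetime(2018, 3, 19, 14, 57),
}

def delta(start, end):
    """Gap between two milestones, formatted as days/hours/minutes."""
    td = milestones[end] - milestones[start]
    hours, rem = divmod(td.seconds, 3600)
    return f"{td.days}d {hours}h {rem // 60}m"

print(delta("observe", "orient"))  # 2d 22h 57m
print(delta("orient", "decide"))   # 0d 4h 25m
print(delta("decide", "act"))      # 0d 0h 35m
print(delta("observe", "act"))     # 3d 3h 57m
```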

Contributing Factors

Things that contributed to our response.

  • University on Spring break
    • UMBC v. U of Baltimore
  • Facility coordinator had a family emergency
  • Alert Fatigue
  • Micro-climates
  • We did not plan for failure

Success Factors

Things that made our response more successful.

  • Locally optimized conference
  • Teamwork

Initial email to sponsors

*Subject:* Response Needed: DevOpsDays Baltimore Inclement Weather Possibility

Hi everyone!

If you've been keeping an eye on this week's weather, you've likely noticed that the Baltimore area is expecting a cold front and some wet weather on Wednesday.

Our venue is run by UMBC and follows the University's delay and closure policies, so we are closely monitoring the weather system and how it may impact DevOpsDays Baltimore. If the Columbus Center is closed on Wednesday, we have two possible options:

1. Pending venue approval, we could push the conference back one day and hold it on Thursday, March 22nd and Friday, March 23rd instead of Wednesday and Thursday.

2. We could push the conference out to April to give attendees and sponsors the chance to make new travel arrangements, etc.

Our organizer team is all hands on deck to find a solution, should the venue close on Wednesday. We are still in the information-gathering stage at the moment, and because our sponsors are the driving force of this conference, we wanted to loop you all in as soon as possible to solicit your opinions.

So, please respond with your preference: if the Columbus Center closes on Wednesday, do you prefer that we move the conference back by one day, or that we push it to April?

We sincerely appreciate your commitment to supporting Baltimore's tech community, and we're doing everything we can to ensure that the conference is a success. We will keep you all in the loop as we receive more information.

Thank you!

The Food Fight Show is brought to you by Nathen Harvey and Nell Shamrell with help from other hosts and the awesome community of Chefs.

The show is sponsored, in part, by Chef.
