Wednesday, November 15, 2017

Russian 'Fancy Bear' Hackers Using (Unpatched) Microsoft Office DDE Exploit



----
Russian 'Fancy Bear' Hackers Using (Unpatched) Microsoft Office DDE Exploit
// The Hacker News

Cybercriminals, including state-sponsored hackers, have started actively exploiting a newly discovered Microsoft Office vulnerability that Microsoft does not consider a security issue and has declined to patch. Last month, we reported how hackers could leverage a built-in Microsoft Office feature called Dynamic Data Exchange (DDE) to perform code execution on the

----

Read in my feedly


Sent from my iPhone

Firefox 57 "Quantum" Released – 2x Faster Web Browser



----
Firefox 57 "Quantum" Released – 2x Faster Web Browser
// The Hacker News

It is time to give Firefox another chance. The Mozilla Foundation today announced the release of its much-awaited Firefox 57, aka Quantum, web browser for Windows, Mac, and Linux, which Mozilla claims beats Google's Chrome. It is fast. Really fast. Firefox 57 is based on an entirely revamped design and overhauled core that includes a brand new next-generation CSS engine written in Mozilla's Rust

----

Read in my feedly


Sent from my iPhone

17-Year-Old MS Office Flaw Lets Hackers Install Malware Without User Interaction



----
17-Year-Old MS Office Flaw Lets Hackers Install Malware Without User Interaction
// The Hacker News

You should be extra careful when opening files in MS Office. While the world is still dealing with the threat of the 'unpatched' Microsoft Office built-in DDE feature, researchers have uncovered a serious issue with another Office component that could allow attackers to remotely install malware on targeted computers. The vulnerability is a memory-corruption issue that resides in all versions of

----

Read in my feedly


Sent from my iPhone

How to Host a Deep Web IRC Server for More Anonymous Chatting



----
How to Host a Deep Web IRC Server for More Anonymous Chatting
// Null Byte « WonderHowTo

Internet Relay Chat, or IRC, is one of the most popular chat protocols on the internet. This technology can be connected to the Tor network to create an anonymous and secure chatroom, without the use of public IP addresses. IRC servers allow one to create and manage rooms, users, and automated functions, among other tools, in order to administer an instant messaging environment. IRC's roots go back to 1988, when Jarkko Oikarinen set out to implement a new chat protocol for users at the University of Oulu, Finland. Since then, it's been widely adopted and used as a lightweight...


----

Read in my feedly


Sent from my iPhone

pfSense 2.4.1-RELEASE Now Available



----
pfSense 2.4.1-RELEASE Now Available
// Netgate Blog

We are excited to announce the release of pfSense® software version 2.4.1, now available for new installations and upgrades!


----

Read in my feedly


Sent from my iPhone

A Great Time to Take A Look at XenServer Enterprise!



----
A Great Time to Take A Look at XenServer Enterprise!
// Latest blog entries

Good afternoon everyone,

As we make our way through the last quarter of the year, I wanted to remind the community of the significant progress the XenServer team has achieved over the last 18 months to make XenServer the awesome hypervisor that it is today!

While many of you have been making the most of your free XenServer hypervisor, I would like to take this opportunity to review just a few of the new features introduced in the latest releases of the Enterprise edition - features that our customers have been using to optimize their application and desktop virtualization deployments.

For starters, we've introduced automated updates and live patching, features that streamline the platform upgrade process by enabling multiple fixes to be installed and applied with a single reboot, and in many cases no reboot whatsoever, significantly reducing downtime for environments that require continuous uptime.

We've also worked with one of our partners to introduce a revolutionary approach to securing virtual workloads, one that is capable of scanning raw memory at the hypervisor layer to detect, protect against, and remediate the most sophisticated attacks on an IT environment. This unique approach provides an effective line of defense against viruses, malware, ransomware, and even rootkit exploits. What's more, this advanced security technique complements security mechanisms already implemented to further strengthen protection of critical IT environments.

Providing a local caching mechanism within the XenServer hypervisor enables our virtual desktop customers to dramatically improve the performance of their virtual desktops, particularly during boot storms. By caching requests for OS image contents in local resources (i.e., memory and storage), XenServer is able to work with Provisioning Services to stream contents directly to virtual desktops, reducing resource utilization (network and CPU) while enhancing user productivity.

Expanded support for virtual graphics allows our customers to leverage their investments in hardware from the major graphics vendors and enable GPU-accelerated virtual desktops that effectively support graphics-intensive workloads.

Designing, developing, and delivering features that bring out the best in virtualization technologies... that's our focus. And the invaluable insight and feedback provided by this community will continue to be the driving force behind our innovation efforts.

Interested in evaluating the features described above? Click here.

Until next time,

Andy

 


----

Read in my feedly


Sent from my iPhone

ChefConf 2018 Call for Presenters is Open



----
ChefConf 2018 Call for Presenters is Open
// Chef Blog

ChefConf is the largest community reunion and educational event for teams on the journey to becoming fast, efficient, and innovative software-driven organizations. In other words, you and your team!

ChefConf 2018 will take place May 22-25 in Chicago, Illinois and we want you to present! The ChefConf call for presenters (CFP) is now open.

ChefConf attendees are hungry for learning and sharing and are eager to hear of your success, experiments, failures, and learnings. Share how you have adopted new workflows, tools, and ways of working as a team. Describe your journey toward becoming outcome-oriented. What have you done to improve speed, increase efficiency, and reduce risk throughout the system? Continuous learning is the name of the game and your experiences are worth sharing!

CFP Basics

Deadline: Wednesday, January 10, 2018 at 11:59 PM Pacific time.

Track themes:

  • Infrastructure Automation
  • Compliance Automation
  • Application Automation
  • People, Process, and Team
  • Delivering Delight
  • Chaos Engineering
  • Don't label me!

Full descriptions of each track can be found on the ChefConf site.

Why Submit a Talk?

ChefConf is the largest gathering of the Chef community. Community is driven by sharing: stories, experiences, challenges, successes, and everything in between. By presenting at ChefConf you are supporting the growth and health of the Chef community.

ChefConf is an ideal platform to spotlight an awesome project you and your team have delivered. Giving insight into your challenges, success, and knowledge will inspire others to take Automation, DevOps, and Site Reliability even further.

Are you trying to build your own brand or speaker profile? ChefConf gives you a great opportunity to expand your marketability, while helping others do the same.

Take a "talk-driven development" approach and propose a session on something in the Chef universe you are keen to learn more about. This approach will give you even more motivation to learn something new and share it with the community.

There are numerous other reasons to submit, from exercising your storytelling skills to taking advantage of the myriad speakers-only swag and green-room amenities. Whatever your motivation, we cannot wait to see your proposal!

What Makes for a Good Proposal?

Be clear, be concise, and be compelling. We received hundreds of submissions for 2017, so brevity is appreciated and ensures your submission will be given thorough consideration.

The best abstracts include a title that clearly states the topic in an interesting way and complete information on the topic and type of talk. If a demo will be involved, let us know and describe it. Bringing in a co-presenter? That's awesome and should be detailed in the proposal.

We've shared details on each track on the ChefConf website. The ChefConf team is also happy to help with your submission, just email us at chefconf@chef.io.

ChefConf 2018 will be here before you know it — we hope to see you presenting in Chicago!

Submit your proposal now.

The post ChefConf 2018 Call for Presenters is Open appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Chef, Terraform, vCenter and Windows



----
Chef, Terraform, vCenter and Windows
// Chef Blog

I recently posted about using Terraform, vCenter, and Chef, and promised a follow-up about extending the Terraform plans to work with Windows and multiple virtual machines. Here is part two, which continues to build off of this plan: https://gist.github.com/jjasghar/f061f493ad8f631a6d4b5b5085c7cb35

Windows

Open the following link to see the diff that gets Windows working with the above Terraform plan.

https://gist.github.com/jjasghar/dbd7348240f23a26a31cd7a02dcb4267

If you don't know how to read diffs, the green or + lines are lines I've added, while the red or - are lines I have removed.

With a successful Linux VM build, the next natural progression is to get Windows working with the same base plan. There is some prep work required down this path, and I chose the approach that made the most sense for me. You'll need to figure out the correct path for you here, and it will probably require talking to your VMware administrators and Windows licensing people to make sure you're in compliance with your environment's rules.

I don't use Customization Specs very heavily, so I'm going to move past them. If you do use them, you'll first need to remove this line, which skips customization specs and just requests the template in its base form.

+ skip_customization = true

If you don't have this line and no customization spec, you'll find that Terraform syspreps the machine, which can wipe out WinRM settings. For me, this was a gotcha that caused a long delay in figuring out how to get Terraform to request a Windows VM. For Chef to be bootstrapped into your Windows VM, you need to enable and set some WinRM settings. Taken from the winrm-cli README, you need to set the following at a PowerShell prompt:

winrm quickconfig # say y here ;)
winrm set winrm/config/service/Auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="1024"}'

Take either a snapshot of the Virtual Machine, or create a template out of it, and then change this line to that name.

+ template = "template-windows2012R2"

The next thing to take notice of is the diff of the variables.tf. Scroll down to the next file and look at Lines 16 to 25. You'll see the changes that are required to talk to the template I created.

-    # Default to SSH
-    connection_type     = "ssh"
+    # Default to winrm
+    connection_type     = "winrm"
     # User to connect to the server
-    connection_user     = "admini"
+    connection_user     = "Administrator"
     # Password to connect to the server
-    connection_password = "admini"
+    connection_password = "Admini@"

Notice the WinRM value for connection_type, the user of Administrator, and my super-secure Admini@ local administrator password. If you scroll back up to Line 55 in the diff, you'll notice one more setting I set for the connection.

+ https = false

With the WinRM settings above, we connect over HTTP, while Terraform connects over HTTPS by default. This option forces the correct protocol.

Just to make sure this is clear, the final major change is the middle stanza of lines 22 to 38.

-  # my template is broken, but as you can see here's some pre-chef work done :)
-  provisioner "remote-exec" {
-    inline = [
-      "sudo rm /var/lib/dpkg/lock",
-      "sudo dpkg --configure -a",
-      "sudo apt-get install -f",
-      "sudo apt autoremove -y",
-      # This was interesting here, i needed to add a host to /etc/hosts, this injects the sudo password, then tee's the /etc/hosts
-      "echo admini | sudo -S echo '10.0.0.15 chef chef.tirefi.re' | sudo tee -a /etc/hosts"
-    ]
-    connection {
-      type          = "${var.connection_thingys.["connection_type"]}"
-      user          = "${var.connection_thingys.["connection_user"]}"
-      password      = "${var.connection_thingys.["connection_password"]}"
-    }
-  }
-

Since my Windows VM doesn't have bash or these Ubuntu-specific commands available, they would ultimately fail on run. I am pretty confident that you can run PowerShell commands instead of bash here, though to be honest I haven't tried it. The remote-exec docs say the provisioner supports both ssh and winrm connection types, which at least implies it can run whatever commands you type there.

Multiple machines

Now that we've walked through creating virtual machines for both Linux and Windows operating systems, we need to figure out how to spin up multiple machines at once. This was surprisingly easy once a base understanding of the .tf plans was in place. Let's take a look at this diff to see how to extend the base plan to 3 virtual machines.

Looking first at the variables.tf diff, you'll notice we added three lines for a default number of 3 and called it count. This should be pretty straightforward, and if you needed to override it, you know how to from my previous post. Because we don't know how many nodes we want, we have to remove lines 18 and 19 so we no longer hard-code the default name.

-    # A default node name
-    node_name_default       = "terraform-1"

Scroll back up to the main diff file and you'll see that if you give the vsphere_virtual_machine resource a count, it will create that many machines, as we do on line 8. There are a lot of examples of how to name the machines; I started from this one:

variable "count" {    default = 2  }  resource "aws_instance" "web" {    # ...    count = "${var.count}"    # Tag the instance with a counter starting at 1, ie. web-001    tags {      Name = "${format("web-%03d", count.index + 1)}"    }  }

…and edited it to my liking on line 12. This way, my machines will be called terraform-0X, where X is the number of the machine in the count we created. Finally, lines 20 and 21 create the node objects on the Chef Server with the same name as the machine inside vCenter, which helps keep things in line.

Clean up

If you've played with these Terraform plans, the easiest way to clean up is to run terraform destroy to nuke the machines from vCenter. It won't delete the node objects from the Chef Server; that's by design, and it's the reason why we have the recreate_client line set to true.

I hope this helps bootstrap your use cases of Terraform, vCenter, and Chef, and makes your path to success that much easier. Starting off from nothing with these three technologies can be hard. Start with these simple examples and you'll find yourself able to do more advanced things before you know it.

The post Chef, Terraform, vCenter and Windows appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Terraform with vCenter and Chef



----
Terraform with vCenter and Chef
// Chef Blog

If you're just starting out with Terraform, you may feel overwhelmed by all the different options and settings. Read on to learn how I set up Terraform with vCenter and Chef. Hopefully, this guide will make your journey a bit easier and provide some context around Terraform .tf config files. The following link contains two example .tf files that work in my lab. These files have comments to help explain what's going on. I'll be taking a deeper dive in this post, so I suggest either having the GitHub gist open beside this post or pulling the files down to your favorite text editor.

https://gist.github.com/jjasghar/f061f493ad8f631a6d4b5b5085c7cb35

Assuming you clicked on the link, you'll notice it contains one_machine_chef_vcenter.tf and variables.tf.

First, scroll all the way to the bottom of the gist and find the variables.tf.

Quick Links

variables.tf

one_machine_chef_vcenter.tf

variables.tf

This might seem a bit backwards, but the best practice is to put everything that can be overridden in a variables.tf file. Terraform pulls in everything in a directory when you run terraform apply, so as a human it's nice to have clear labeling for the file you know holds variables that can be manipulated if needed.

If you git pulled, saved, or copypasta'd these files down, the first thing you have to do is put them in their own directory. This was the "gotcha" that I missed going through the tutorial, and it's good to highlight here. Terraform creates a few other files when you run its commands, and if you don't keep everything in its own directory you can clobber things when Terraform runs, and that's never good.

OK, now that we have the files, let's open the variables.tf and walk through it.

VMware Connection Settings

# Main login, defaults to my admin account
variable "vsphere_user" {
  default = "administrator@vsphere.local"
}

# Main password, defaults to my admin account
variable "vsphere_password" {
  default = "Good4bye!"
}

# vCenter server
variable "vsphere_server" {
  default = "vcenter.tirefi.re"
}

Lines 1 through 14 contain the username, password, and server you want to drive. Putting the password in the file is a pretty dumb idea, but this is an example, so let's say we want to inject it at terraform plan instead. Remove lines 6 through 9 and add something like this: terraform plan -var 'vsphere_password=Good4bye!' This will inject the password in the plan phase, which is a bit more secure. There are many other options if you want to dig farther into the docs: this link covers the base usage, while this link is a much deeper dive into variables. As you can see, there are a ton of things you can do here, but let's iterate instead of biting it all off at once.

Chef Connection Settings

# My connections for the Chef server
variable "chef_provision" {
  type                      = "map"
  description               = "Configuration details for chef server"

  default = {
    # A default node name
    node_name_default       = "terraform-1"
    # Run it again? You probably need to recreate the client
    recreate_client         = true
    # Chef server :)
    server_url              = "https://chef.tirefi.re/organizations/tirefi/"
    # SSL is lame, so lets turn it off
    ssl_verify_mode_setting = ":verify_none"
    # The username that you authenticate your chef server
    user_name               = "admini"
  }
}

Lines 16 through 33 are my connection to the Chef server I have, plus some default settings so I don't have to worry about them. Having sane defaults is always a good idea, and these are mine. You can override them just as we did when removing the password from the file; as an experiment, I suggest giving that a shot. The most interesting line in this stanza is line 29, ssl_verify_mode. As a practitioner of Chef you've probably set this mode on your clients for a self-signed Chef server multiple times. This option is deep in the docs, and this line sets it to the value you know you use. As you can see, it's wrapped in quotes as a string that is passed down to the client.
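
For context, here is a minimal sketch of what that setting looks like on the node itself. This is an illustrative, hand-written client.rb, not necessarily the exact file the chef provisioner generates, and the node name is a placeholder:

# /etc/chef/client.rb (illustrative only)
chef_server_url  "https://chef.tirefi.re/organizations/tirefi/"
node_name        "terraform-1"
# skip certificate verification for a self-signed Chef server certificate
ssl_verify_mode  :verify_none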

Instance Connection Settings

variable "connection_thingys" {    type                      = "map"    description               = "Configuration details for connecting to the remote machine"      default = {      # Default to SSH      connection_type     = "ssh"      # User to connect to the server      connection_user     = "admini"      # Password to connect to the server      connection_password = "admini"    }  }  

Because I'm horrible at naming things, lines 35 through 47 are my "connection_thingys". In order to get into the machine, Terraform needs a way to either SSH (in this case) or WinRM into it. I set my default username and password for my templates and allow them to be overridden by my future self. Just a tidbit of fun here: all my admin accounts are called admini because Ubuntu won't let you have admin, and I hate passwords, so I use admini as the password. For CentOS, I use root/admini to be as consistent as I can be. So if I wanted to bootstrap a CentOS machine, I would do something like: terraform plan -var 'connection_thingys.connection_user=root' The astute user would notice that line 36 in the one_machine_chef_vcenter.tf has the template hardcoded, so I would have to change that too, but for brevity's sake this is the gist.

one_machine_chef_vcenter.tf

Scroll back up or open up one_machine_chef_vcenter.tf now; this is the meat of the integration. It's pretty verbose, which is dealer's choice of good or bad. For showing this off to someone, I think it's really nice, but alas, figuring it out by hand is a test of patience.

vSphere Provider Settings

# Configure the VMware vSphere Provider
provider "vsphere" {
  user           = "${var.vsphere_user}"
  password       = "${var.vsphere_password}"
  vsphere_server = "${var.vsphere_server}"

  # if you have a self-signed cert
  allow_unverified_ssl = true
}

Because we are going to be connecting to vCenter, we need to talk to the vSphere provider. We could talk about why they are named differently, but that's a whole other blog post. Lines 1 through 8 are the variablized connection options. Because I didn't namespace them, and they are at the "root" of the variables.tf file, they are simply referenced with ${var.blah}; this is helpful to know as you start making your own. Because I don't have anything other than self-signed certs, line 7 is how you connect to a vCenter instance with a self-signed cert.

The rest of the file is the declaration of the one machine. Yes, it's only one, which seems a tad bit overkill, but as I say above, it's verbose and does exactly what you'd like it to do. There are three main stanzas in this section: a resource stanza, which you can probably guess is the actual machine, and two provisioner stanzas. We'll start with the resource, then talk about the two provisioners.

The resource declaration is lines 11 through 39. We tell the resource that we want a vsphere_virtual_machine, which tells it to create a VM object in vCenter. If you click on this link you can see the official documentation and, more interestingly, take a look at the left-hand sidebar. There are all of these different resources that you can declare, so in theory you could spin up a whole infrastructure in your VMware environment, and tear it down, with two commands. I challenge you to attempt this after you get the basics of just one machine down.

# Create a virtual machine within the folder
resource "vsphere_virtual_machine" "terraform_01" {
  # name of the machine inside vCenter
  name = "terraform-1"

Line 13 is the name of the machine you want inside vCenter. If you have a standard naming convention, this is where you'd apply it, though as you get more comfortable with this, maybe create it as a variable? You could then override it on the command line if you had to build multiple of these.

  # DNS inside the machine
  dns_suffixes = ["tirefi.re"]

  # Domain instead of default vsphere.local
  domain = "tirefi.re"

Lines 14 through 17 are the DNS entries. If you want the DNS inside the VM to be set, this is a way for the driver to do it; line 15 has to be a list. It would look something like ["tirefi.re","example.com"], but as you can see, I only use tirefi.re. By default, the driver adds vsphere.local as the default domain; if you want to force a different domain, line 17 is how to do it.

  # What datacenter to connect to
  datacenter = "Datacenter"

  # How many vCPUs
  vcpu   = 2
  # How much memory in MBs
  memory = 4096

  # Create it in this resource pool
  resource_pool = "terraform"

  # Linked clones are the best clones, don't forget to create the snapshot
  linked_clone = true

Lines 18 through 27 are where you'd like to put it inside of vCenter. My datacenter's name is "Datacenter"; if you had multiple DC objects in vCenter, like "West" and "East", you could target one here by changing the value to "East" or "West." Maybe this is another candidate for a variable? If you have multiple DCs, it would be nice to just flip a variable to get this built in any of them.

If you need to bump the size of your cloned template, lines 20 through 23 are a way to do it. I think these are my default settings from the template, but this at least shows them off. Lines 24 and 25 are the resource pool that you want to put it in. This is required if you hit the `default resource pool resolves to multiple instances, please specify` error. Here I just created a resource pool called terraform and started to throw machines into it. I should mention here: if you figure out how to target the main root of the Datacenter, I'd love to hear it, so please reach out to me.

Lines 26 and 27 aren't required, but you should be using linked clones if you aren't already. Assuming your template has at least one snapshot, this will clone the template via a linked clone from the newest snapshot. It's significantly faster, per the test I show here:

vsphere_virtual_machine.terraform_01: Creation complete after 3m24s (ID: terraform-1) # as a normal clone
vsphere_virtual_machine.terraform_01: Creation complete after 1m27s (ID: terraform-1) # as a linked clone

  network_interface {
    # What network you want to connect to
    label = "Internal Network 3208"
  }

Now that you have the machine declared, you probably need a place to put it on the network; lines 29 through 32 are an example of how to do it. My main network for my VMs is called "Internal Network 3208", so that's where it creates and connects the vNIC.

  disk {
    # What template to clone
    template = "template-ubuntu1604"

    # What datastore to create it in
    datastore = "vsanDatastore"
  }

Lines 34 to 39 are the disk setup. This is the bare minimum you need for your VM to work: the template and the datastore you want to inject it into. The template line is self-explanatory, but line 38 is more interesting. The name of my datastore is "vsanDatastore", which, yes, is a vSAN datastore. So the driver allows you to leverage vSAN if you use it, which was a pleasant surprise and genuinely unexpected.

Now that we have fully declared the machine, including networking and disk, you could remove the rest of the file and run this. That's pretty neat: a repeatable, variablized plan that always requests exactly what you want from vCenter. But I like the next stanzas more; this is where the post-provisioning steps start to shine.

Remote-Exec Provisioner Settings

Let's continue by talking about the first section, lines 42 through 56. This provisioner, called remote-exec, as you can guess, runs arbitrary commands on the remote machine.

   inline = [
      "sudo rm /var/lib/dpkg/lock",
      "sudo dpkg --configure -a",
      "sudo apt-get install -f",
      "sudo apt autoremove -y",
      # This was interesting here, i needed to add a host to /etc/hosts, this injects the sudo password, then tee's the /etc/hosts
      "echo admini | sudo -S echo '10.0.0.15 chef chef.tirefi.re' | sudo tee -a /etc/hosts"
    ]

Since this demo is an Ubuntu box with bash, lines 43 to 50 are just bash commands. You can also use PowerShell, if you have PowerShell access or a Windows template, but I'm on Ubuntu here so bash is all I have. If you take a look at the commands, you'll see that I've had trouble with locked dpkg in the past, so this is me force-unlocking it, fixing any broken dependency packages, and auto-removing any hanging packages.

Line 49 has probably caught your eye. This is the only way I could figure out how to use Terraform to inject something into /etc/hosts during initial post-provisioning. As all DNS administrators know, DNS is always more challenging to work with than you'd like, so I added this line to force where my Chef Server IP is located. Let's take a detour for a quick explanation of this specific command:

echo admini | sudo -S echo '10.0.0.15 chef chef.tirefi.re' | sudo tee -a /etc/hosts

First, the echo supplies the sudo password for my user admini. The middle pipe runs sudo with the read-password-from-standard-input flag -S and echoes the IP, short name, and fully qualified domain name. The third pipe runs sudo again with the cached password and appends to the /etc/hosts file using tee -a. Yes, this seems like a lot, but the beauty of this example is that it shows you how to inject arbitrary lines during post-provisioning with sudo. It's worth playing with this chain of commands to wrap your head around it. It'll help you a lot as you get more advanced and need to do these types of "one-off" post-provisioning commands.

    connection {
      type          = "${var.connection_thingys.["connection_type"]}"
      user          = "${var.connection_thingys.["connection_user"]}"
      password      = "${var.connection_thingys.["connection_password"]}"
    }

Lines 51 to 55 are the way to SSH into the machine; they pull the connection settings in from variables.tf and inject them here.

Chef Provisioner Settings

provisioner "chef" {      server_url              = "${var.chef_provision.["server_url"]}"      user_name               = "${var.chef_provision.["user_name"]}"      # I couldn't figure out how to put the userkey as a variable, so you'll need to change this      user_key                = "${file("/Users/jjasghar/repo/vmware_playground/pems/admini.pem")}"      node_name               = "${var.chef_provision.["node_name_default"]}"      # Here's a inital run_list :)      run_list                = ["recipe[base]"]      recreate_client         = "${var.chef_provision.["recreate_client"]}"      on_failure              = "continue"      ssl_verify_mode         = "${var.chef_provision.["ssl_verify_mode_setting"]}"        connection {        type          = "${var.connection_thingys.["connection_type"]}"        user          = "${var.connection_thingys.["connection_user"]}"        password      = "${var.connection_thingys.["connection_password"]}"      }    }  

The next provisioner stanza runs from lines 58 to 75 and is how to bootstrap Chef into the virtual machine. If you use Chef, all of these settings should seem extremely familiar. I won't go through them all, but I'll highlight the possible "gotchas." Line 62 is how you can pull in the .pem for the username on line 60. It seems that Terraform doesn't like this to be interpolated, so this was the only way I could figure out how to get it to work. If you have an initial base cookbook, line 65 is where you can declare the run list for it. The connection section uses the same settings as the remote-exec, which, needless to say, is pretty useful because we can declare them once via the variables.tf.

With all of these settings in place, if you want to bootstrap a VM in vCenter with Chef injected and run, you can, with a few simple commands like the following:

~/terraform/testing $ terraform plan     # makes sure that the plan works and creates the initial plan
~/terraform/testing $ terraform init     # if the previous fails due to missing plugins or settings, this command pulls down vsphere, for instance
~/terraform/testing $ terraform apply    # runs the terraform plans; after you've set everything up, this should be the only command you ever need to run to build this
~/terraform/testing $ terraform destroy  # will remove the running VM, but _not_ remove it from the Chef server. Take note of this; the recreate_client option will recreate it

Hopefully, you've learned something here or at least seen the power of leveraging Terraform for something like this. I'm working on Windows and multiple node examples now too, so stay tuned for a similar blog post of those examples. I'll build off what I have here and comment any major changes to get those more advanced examples to work.

The post Terraform with vCenter and Chef appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

DevOps and Monitoring



----
DevOps and Monitoring
// Chef Blog

Traditional monitoring includes monitoring low-level items like CPU, memory, and disk utilization. It is still important to understand and have data for these things as they will help with capacity planning and may be helpful data points when responding to incidents and outages.

DevOps is a cultural and professional movement focused on how to build and operate high-velocity organizations.

Practicing DevOps means understanding who the customers of a service are and what their needs are. A DevOps approach to monitoring may start by answering the question, "is it up?" Starting there helps encourage discussion and discovery of what "up" means. Those discussions happen across many parts of the organization to ensure common understanding. It may mean that your customers can pay you money, that they can stream video from your site, or that they can reserve seats on a flight or in a venue.

Customers' experience of a single interface is likely provided by a number of backend services working in concert to help the customer complete the task at hand. Understanding "up" means understanding how all of these services work together and which parts are essential.

It's nearly impossible to talk about monitoring without also discussing alerting. Typically, alerts are sent to people when monitors pick up anomalies. Sometimes these alerts are actionable, but too often they end up just being noise. For example, there may be a spike in CPU load caused by some batch processing that is not actually having an impact on the customer experience and will end after the batch process has completed. Given that scenario, it is not appropriate to send an alert, potentially waking someone at 3AM. Yet this is often what happens. Practicing DevOps means that we put our people first, and waking them at 3AM to tell them about something that is not important and requires no immediate action is inhumane.

I recently had the pleasure of joining Leon Adato (@leonadato), Clinton Wolfe (@clintoncwolfe), and Michael Coté (@cote) at THWACKcamp, an online conference hosted by SolarWinds, to discuss these ideas about monitoring and more. We were all part of a panel discussion titled 'When DevOps Says "Monitor".' A recording of the panel, a transcript, and other resources are all freely available now over on the THWACKcamp site. Check out the recording and let us know what you think.

How are you reinventing your organization's approach to monitoring and alerting?

The post DevOps and Monitoring appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Learning Chef: online and in-person



----
Learning Chef: online and in-person
// Chef Blog

As you may have seen in this blog post, we now have over 10,000 members of the Chef community signed up for an account on Learn Chef Rally. The Chef training team is busy developing the learning content you need to succeed with Chef and grow your career in DevOps. Here are a few examples of content we've created for you:

We are committed to continuous improvement for training content and the overall Chef learning experience. Here are the highlights for the last month.

Learn Chef Rally

In October 2017 we added two new tracks to Learn Chef Rally that focus on using Chef with AWS and Azure. There is also a new Habitat track to help you get started with application automation. Log in or sign up for a user account today and start earning badges for these new tracks:

Keeping our content fresh

Our goal is to have Learn Chef Rally content current and relevant. Many of our modules run through an automated testing process. This helps ensure that each step in the module works as expected and that the code and sample output match the latest versions of product releases.

Improving our site

We're making the site better every day. For example:

  • You can learn about new and featured content through our Updates page.
  • We remember the login method you last used so you can more quickly pick up where you left off.
  • The revised profile page makes it easier to track your progress and see all your great achievements.

Training

New Course

We're excited to introduce a new course: DevOps Foundations – Windows & Linux. Registration is now open for the next offering of this class which will be held online at the end of November.

This course is designed to accelerate you from zero to hero in the world of DevOps. Through this three-day, command-line-driven adventure, you will develop a solid foundation for managing dozens, hundreds, or even thousands of servers using Chef. And because so many companies have both Windows and Linux servers in their environments, we teach you the technical intricacies of managing and integrating both platforms.

Course Update

Course revisions to Chef Automate Compliance bring us up to date with the latest version of Chef Automate (v.1.7.x) and focus on using the Chef Automate UI for scanning instead of the standalone Chef Compliance server. This course continues to cover InSpec, the Audit cookbook, remediating issues/re-scanning nodes (Detect and Correct), and creating custom compliance profiles.

Why Take Training?

With all that great Learn Chef Rally content, you might be asking yourself, "what more would I learn in a training class?" I highly recommend Learn Chef Rally for everyone who is learning Chef, but the opportunity to learn in a guided classroom experience (in person or online) should definitely be considered. Your guide for this experience will be one of our highly qualified trainers. Several of them work for our partner, TechnoTrainer, and I will conclude this month's post by highlighting their success delivering training for Chef.

Here are a few testimonials from recent students:

"I was sad to have the training end! The personalized attention from the instructor was priceless. Chef made sense and I LOVED it! Every question and problem was solved and explained. Truly craft masters of Chef, take this and be ready to be amazed. Thank you CHEF!"

"Really great training with an extremely experienced instructor. It's so nice to get practical real-world answers to questions instead of a blank stare. Having an engineer conduct the training makes all the difference, even for an beginner/essentials type class."

"Robin was fantastic! He is very knowledgeable, energetic, clearly interested in what he was teaching. He cared that every student was gaining benefit from the class. He was accommodating to all attendees especially during labs or if we had any issues or requests. I would definitely take more Chef classes or other technical classes if I knew Robin Beck was teaching them. Having worked in IT training for many years, I am not easy to impress and Robin did just that – thank you!"

Online instructor-led learning

Although we offer some in person classes, most of our classes are now offered online. My team and our training partners are committed to making the online learning experience awesome and the feedback confirms this.

"One of the best VILT [Virtual Instructor Led Training] I've ever taken, very well structured and our instructor was excellent, kept us engaged at all times while being patient and very very knowledgeable."

"Best technical training I have ever participated in…highly recommend if you want to unlock the power of some hidden-but-amazing features of Chef!"

Register for upcoming classes

When you're ready to build on the skills you've learned from the self-paced modules on Learn Chef Rally, instructor-led training is your next step. The personalized attention you'll receive from the instructor will help you prepare for the real-world challenges of implementing continuous automation in your organization. And with a great trainer delivering the class, you will definitely be delighted. Chef training is not only an investment in current job success but also in future career opportunities.

So don't hesitate, take a look at the list of upcoming classes, sign up and "be amazed" at the quality and value of Chef training.

The post Learning Chef: online and in-person appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Habitat on Microsoft Azure OpenDev



----
Habitat on Microsoft Azure OpenDev
// Chef Blog

Microsoft continues to embrace open source. "We've been on this journey for the last few years now," says Rohan Kumar, general manager, Database Systems, Microsoft. "It's really a company about choice right now. We really want to meet customers where they are."

On October 25, 2017, Microsoft hosted Azure OpenDev, with six speakers representing multiple open source projects. We are delighted that our very own Matt Wrock was chosen to present on Habitat, Chef's newest open source project for application automation.

In Matt's presentation, "Modernize your Java development workflow with Habitat" he showed how Habitat can build and create an immutable package containing a Java application and run that package in a variety of environments. After creating a distributed Java web application in Habitat, he showed how the application runs in a local Habitat supervisor, and how to perform a rolling update across nodes in Azure. Matt demonstrated the same application running in a container on an Azure deployed Docker host.

For a technical walk through of Matt's presentation, check out his post on the Habitat developer blog. You can also view the full Azure OpenDev day of presentations. (hint: Matt's session starts at 1:22:20)

Want to know more about Habitat? Get started with our hands-on demo. With our sample Node.js application, you'll learn how to set up automated builds, auto-publish containers to Docker Hub, and trigger new builds and containers when you commit new code.

The post Habitat on Microsoft Azure OpenDev appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Coming soon: Chef Client 14 and Chef Development Kit 3



----
Coming soon: Chef Client 14 and Chef Development Kit 3
// Chef Blog

At Chef, we understand that Infrastructure Automation enables your business to succeed, but it isn't the reason your business exists. Ultimately, all software is a careful balancing act of introducing new features, fixing bugs, and saying no to ideas that we might love to do but are too disruptive–and it's up to us to ensure that our work doesn't create too much work for you. To ease that burden and help you plan for updates, we've introduced a regular release cadence, with backwards compatible releases of Chef Client each month, and a yearly "major" release that may remove old, confusing or dangerous functionality.

With that in mind, we're very excited to announce that we'll be releasing Chef Client 14 and Chef Development Kit 3 in April 2018. Chef Client 14 continues the themes that Chef Client 13 introduced, concentrating on ensuring the safety, correctness, and performance of your cookbook code, while introducing many new resources and improved functionality. The ChefDK 3 release will feature the latest and greatest versions of all your favorite Chef developer tools. We will cover these in-depth in a future blog post.

What to expect in Chef Client 14

We're still laser focused on making writing Custom Resources enjoyable and fast. With that in mind we're intending to do considerable work on ChefSpec to make it much easier to test custom resources, we're going to work on our logging so cookbook authors and operators can get exactly the information they need to reason about their Chef Client runs, and we're going to make it easier to integrate with MySQL and PostgreSQL out of the box.

We will be removing some older behaviors that we found to cause errors in cookbooks. To see a list of these removals, our Deprecations page is continually updated, and Foodcritic will already issue warnings for the changes we're introducing in Chef Client 14.

We're also planning on adding a number of frequently used resources. When we think about adding a resource, we examine how stable the resource is (and how frequently the thing it controls changes), how often it's used, and whether it's supportable. Including new resources allows you to do more things out of the box, speeding up the development experience and making it easier to choose the best approach to a problem. Some of the resources we're thinking about adding include:

  • docker_container, docker_registry, docker_exec, and docker_network
  • ohai_hint and chef_handler, for easier cookbook development
  • dmg_package, homebrew_cask, and homebrew_tap for macOS support
  • windows_font, powershell_module, and a few other Windows resources

Chef Client 12 and ChefDK 1 EOL

April 2018 also brings the End of Life of Chef Client 12, nearly four years after its release in December 2014. We understand that for many of you, Chef Client 12 continues to "just work". However, behind the curtains it's becoming technologically outdated and less secure. Much of the software contained in the Chef Client 12 packages has also reached end of life, meaning that we can no longer offer security releases should a vulnerability in (for example) Ruby be found. Continuing to support Chef Client 12 also makes life harder for our community of cookbook authors, requiring them to support a broad set of capabilities. Along with Chef Client 12, ChefDK 1 will also become End of Life and will cease to be updated or supported. Today, we are releasing a new online resource with an overview of the EOL process and resources to ease the transition to Chef 13/14.

Chef 14 discussion and Chef 12 migration

We expect that Chef Client 14 will be the easiest to use and most featureful release we've made, and we can't wait to get it in your hands. Over the next six months watch this space for additional resources to get the most out of Chef 14/CDK 3, and to ease your transition from Chef 12. Finally, we'd also love your feedback on our plans, and especially the set of resources we're planning on shipping. We have a new official Slack channel for Chef 14 discussion and help with Chef 12 client migration. Join the #Chef-14 channel in our Chef Community Slack at community-slack.chef.io. This channel is actively monitored by Chef employees and community members who look forward to answering your questions and helping make your migration a success.

The post Coming soon: Chef Client 14 and Chef Development Kit 3 appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Everyday Compliance with InSpec



----
Everyday Compliance with InSpec
// Chef Blog

As National Cyber Security Awareness Month comes to a close, it's a great opportunity for all of us to make security and compliance part of our daily routine. I know, I know…no one thinks about "compliance" and gets excited. However, taking advantage of tools like InSpec can help us conquer everyday compliance with ease.

Understanding The Terminology

One of my favorite features of InSpec is its language; it's human-readable so non-technical individuals can participate in the creation of profiles, but it's also executable and powerful enough to allow for robust compliance automation. Let's define some commonly-used compliance terms and how they relate to InSpec.

  • control: A business practice, policy, or procedure used to minimize risk. InSpec provides a language to codify your controls in a way that technical and non-technical people can understand.
  • test: A set of checks and validations whose outcome is used to determine the status of a control. InSpec executes the tests for each control to determine if the control is being satisfied correctly.
  • profile: A group of controls and tests. When InSpec scans a host for its compliance status, it executes a profile.
  • audit: An examination, usually by a third party, that determines the current compliance status against a given set of controls. The reports generated by InSpec and Chef Automate can be entered as evidence for an audit.

By codifying controls and tests inside of an InSpec profile, the steps necessary to "be compliant" can be mutually understood by both the auditors and the auditees.
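
As a rough illustration, a single control and its tests might look like the following in an InSpec profile. The control ID, title, and SSH rule here are made up for the example, not taken from any particular benchmark:

control 'org-ssh-01' do
  impact 0.7
  title 'Disable direct root login over SSH'
  desc 'Our policy says administrators log in with named accounts, never as root.'

  # The test: InSpec inspects the actual sshd configuration on the host
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end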

Compliance Isn't Just About Regulatory Requirements

Perhaps your organization has a rule about password age and length. Ensuring that all user accounts adhere to that rule is compliance! InSpec is a great way to document these rules and also ensure that they are followed. Because InSpec can execute multiple profiles, it's easy to place your company's rules in one profile and controls for a particular regulatory requirement in another profile.
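
For instance, here is a hedged sketch of what that password-age rule could look like as an InSpec control; the 90-day value and the login.defs check are placeholders for whatever your organization actually requires:

control 'company-password-age' do
  title 'Enforce the company password aging rule'

  # Check /etc/login.defs for the maximum password age on Linux hosts
  describe login_defs do
    its('PASS_MAX_DAYS') { should cmp <= 90 }
  end
end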

All too often people think of "compliance" as something that must be adhered to because of a government regulation. Your organization's rules are just as important to codify and assess regularly.

Don't Wait for the Audit

Once a profile has been created, it's time to put it to use. Using InSpec to scan a production fleet during audit time is a logical choice and will certainly help reduce the amount of time spent on audit tasks. However, InSpec is easy to integrate into pre-production environments, as well.

Profiles can be stored in a variety of locations, including Chef Automate's built-in profile store, making it easy to share profiles with others. Using a tool like Test Kitchen with the kitchen-inspec plugin, developers can test their applications and systems against the very same profiles used to scan production before the code even leaves their workstation.

Test Kitchen and InSpec also operate wonderfully in a delivery pipeline. Embracing a mindset of "nothing ships to production unless it passes compliance" will help ensure that once your compliance tests are green in production, they stay green.

Your Auditor is Your Partner

Raise your hand if you look forward to when your auditor visits. Hmmm, no hands raised… just as I thought.

It's time for us to appreciate the auditor. Besides helping with all the necessary paperwork and processes to officially complete an audit, they provide something even more valuable: the experience of routinely performing these audits and deciphering the requirements. Many government compliance documents are opaque, making it difficult to understand how to properly satisfy the requirements. Enlist your auditor's assistance and gain a shared understanding of each control. Once you have that, it becomes considerably easier to automate the audit.

Everyone is Responsible for Compliance and Security

Guess what? Even if you're not the company's Chief Compliance Officer or a member of your organization's security department, security and compliance are YOUR responsibility. In fact, it's everyone's responsibility! Application developers have an obligation to build applications that protect sensitive information and don't erode an organization's compliance posture. Team members ranging from systems engineers to data center operators have to participate in periodic audits. In between audits, they follow and refine procedures to make the next audit a bit easier.

And if you're in sales or marketing, you're not off the hook either! As a representative of your company and your organization, you are another critical set of eyes on the lookout for situations that aren't quite right which may be indicative of a more serious issue brewing.

We all have our part.

Let's Get Started

Audits can be scary. Compliance can be annoying. However, they serve a critical purpose to keeping your company, your data, and your customers safe. InSpec and Chef Automate can help lighten the audit load and allow you to embrace a culture of "compliance first" without reducing the flexibility needed to delight your customers.

There are a number of ready-to-run profiles available on the Chef Supermarket that you can try right now. Download and install the ChefDK, and then run inspec supermarket exec dev-sec/linux-baseline --target ssh://username@1.2.3.4 -i /path/to/ssh/key --sudo and experience how easy it is to use InSpec.

For a guided hands-on experience with InSpec, try the Compliance Automation track on Learn Chef Rally. You'll learn the basics of InSpec, how to use community compliance profiles, and more.

The post Everyday Compliance with InSpec appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Chef on AWS and Azure – New Tracks on Learn Chef Rally



----
Chef on AWS and Azure – New Tracks on Learn Chef Rally
// Chef Blog

It takes a community of passionate and involved partners to fuel the love of Chef, DevOps, and automation. Our partnerships with Microsoft Azure and Amazon Web Services have made for an exciting experience using Chef Automate. You can learn more about our partner integrations on the Chef web pages for Azure and AWS.

For a hands-on experience, check out these Learn Chef Rally tracks to get up and running with Chef on Amazon Web Services or Chef on Microsoft Azure and earn these badges.

Both of these tracks provide a streamlined view of Chef Automate focused entirely on your favorite cloud environment. Whether you're considering migrating to the cloud or looking to further automate your existing cloud deployment, you'll see how Chef helps you gain insight into what's happening on your infrastructure and how to speed up development.

What you'll learn

Within each of these tracks you'll find the following modules:

Learn the Chef basics

Get the hang of how Chef works by configuring a cloud instance directly. You'll set up a web server and serve a basic home page to get a feel for how cookbooks, recipes, and resources work. You'll also see how Chef's test and repair model ensures that configuration changes are made only when needed.
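
To give a flavor of what that looks like, here is a minimal recipe sketch along the lines of what the module has you build; the package name, file path, and page content are illustrative, and the track walks you through the real steps:

# recipes/default.rb -- install, enable, and start a web server,
# then drop a simple home page in place
package 'apache2'

service 'apache2' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<html>Hello from Chef!</html>'
end

Because each resource describes a desired state, re-running the recipe makes changes only when something has drifted; that is the test and repair model mentioned above.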

Manage a node with Chef Automate

Get hands on with Chef Automate and see how to gain insight into what's happening on your infrastructure. You'll bootstrap your first node and learn how to run Chef automatically to keep your systems up to date.

Get started with Test Kitchen

Learn how to speed up the development process by using Test Kitchen to test out your infrastructure code on temporary cloud instances before you make any changes to your infrastructure.

Coming Soon

In the next few days we'll be releasing a fourth module where you'll use Chef Automate to push configuration changes to a production-like environment. We'll also continue to work with our partners to bring you fresh content and training modules to help you build your Chef knowledge and skills. Have a suggestion for future modules? Contact us at training@chef.io.

Get Started

Start learning and earn those badges now!

The post Chef on AWS and Azure – New Tracks on Learn Chef Rally appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Guest Blog: Unnatural DevOps Delivers Super-Natural Results



----
Guest Blog: Unnatural DevOps Delivers Super-Natural Results
// Chef Blog

Automation is a big component in providing IT as a service. Trace3, a Chef partner since 2013, empowers organizations to keep pace with the rapidly changing IT landscape by leveraging innovative technologies, like Chef, allowing companies to leverage Continuous Automation and ensure a competitive edge in today's marketplace.

DevOps is not necessarily a "natural" process. Asking people to function outside of their comfort zones and prescribed roles is a challenging task for any organization, and requires a shift in organizational behavior. Here at Trace3 we believe that while DevOps is not natural, a supernatural result is achievable with the right cultural transformation supported by good tools that drive collaboration and automated flow between people and tools in the application delivery workflow.

Implementing DevOps

To quote from Michael Schmidt's article, Ten ways DevOps will benefit your IT department, "When implemented properly, DevOps can transform your software engineering process and create value for both employees and customers, producing business performance and value." These are huge "supernatural" benefits that every business would like to achieve.

Implementing DevOps properly can be challenging. DevOps depends on collaboration, smooth process flow from conception through deployment, and feedback between people, processes and technologies. It's more natural for people and departments to hoard information and centralize control, traits which are contrary to DevOps best practices. People tend to trust their own abilities and are skeptical when they need to trust dependencies on others. In some organizations, people view competition with others as a means to personal advancement. Compounding this, organizations often manage, measure and reward success at the silo level, instead of recognizing a successful delivery as a result of successful execution across the end-to-end DevOps value stream.

Breaking down silos

Breaking down silos through transparency and the free exchange of information helps organizations implement the unnatural DevOps processes needed to achieve supernatural results. Fortunately, the transformation is assisted with technology. For example, Continuous Automation tools like Chef Automate liberate sharing of infrastructure, application and compliance information. Pauly Comtois, Hearst's Vice President of DevOps, describes the role of strong tooling in successful DevOps transformations in his article Enterprise DevOps at Hearst Digital Media.

"Tools can be an incredible asset in this regard. We deploy Chef across multiple teams in the value stream, automating the deployment, configuration and state of environments in Dev, QA and Production."

In scenarios such as the Hearst example above, tools support integrated and automated processes for builds, configuration, infrastructure, and deployment management. Information managed by Chef Automate becomes a single "public yet controlled" source of truth for configuration data, usable by all processes and tools in the toolchain across the end-to-end pipeline. People who once hoarded configuration data can trust that the tools will keep the information secure, reliable, and available on demand as readily accessible code.

Tools to support a DevOps culture

What's more, Trace3 believes Continuous Automation platforms such as Chef Automate can enable powerful scenarios such as Compliance as Code. This combination of cultural and tooling changes allows us to partner with our customers to successfully break down silos and achieve positive DevOps results in their organizations. Moving beyond the natural behavior of organizations and individuals is difficult, but success can create supernatural results for long-term growth and team effectiveness.

Learn More

The following tracks on Learn Chef Rally will help you learn the skills you need to get started with DevOps and Continuous Automation with Chef Automate.

DevOps Transformation

Digest the cultural and technological changes that need to happen to mix DevOps principles into your organization. Begin your own DevOps journey through videos, case studies, and exercises to evaluate your progress. 

Try Chef Automate

Get Chef Automate up and running on your desktop in just minutes. Scan a few systems for compliance and see whether they adhere to the recommended guidelines.
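
Under the hood, those compliance scans run InSpec controls like the example below; the control name and rule are illustrative, not drawn from the profiles the track actually uses.

```ruby
# An illustrative InSpec control -- not from an actual Chef Automate profile.
control 'ssh-disable-root-login' do
  impact 1.0
  title 'Disallow root login over SSH'
  desc  'Remote root login should be disabled in the SSH daemon configuration.'

  describe sshd_config do
    its('PermitRootLogin') { should cmp 'no' }
  end
end
```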

The post Guest Blog: Unnatural DevOps Delivers Super-Natural Results appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

“Habitatize Yourself” a Ruby Web Application in Habitat



----
"Habitatize Yourself" a Ruby Web Application in Habitat
// Chef Blog

Editor's Note: "Habitatize Yourself" a Ruby Web Application is a YouTube series hosted by Franklin Webber. This series takes you from idea to a Habitat-packaged Ruby application in a lunch break. Watch, learn, and code along with him, and see how Habitat makes building, deploying, and managing modern applications delightful.


I made two attempts at learning Habitat over the past year. I watched all the available videos. Built and then rebuilt the sample applications in the tutorials. And when I was all done, I started to ask: what do I do now? This spurred me to create a new video series sharing my learning experiences to help accelerate your Habitat knowledge.

Kelsey Hightower's keynote at ChefConf 2017 inspired me, in particular this description of how he learns new things:

"I take something that I know and pair it up with something that I don't know." ~ Kelsey Hightower

Watching him playfully demonstrate Habitat working with Kubernetes brought me back to another requirement: for me to learn successfully, I need my work to bring delight, even in tiny ways. His talk reminded me that learning is also play, which brought me to creating animated GIFs in Ruby.

Ruby Web Application in Habitat

Within a few hours I had a working script generating images filled with a friend's face. By late afternoon, the functioning web service was churning out animated images for all my co-workers. The next morning, with some help from the amazing Habitat Community and documentation, I had my web application successfully packaged with Habitat.
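
Packaging a Ruby service like that comes down to writing a Habitat plan. The sketch below is a heavily simplified illustration assuming a Bundler-based app; the origin, package name, and callbacks are placeholders, not Franklin's actual plan.

```bash
# plan.sh -- illustrative Habitat plan for a small Ruby web service
pkg_name=gif_web_service        # hypothetical package name
pkg_origin=myorigin             # hypothetical origin
pkg_version="0.1.0"
pkg_deps=(core/ruby core/bundler)

do_build() {
  # Vendor the app's gems inside the package's build area.
  bundle install --path vendor/bundle
}

do_install() {
  # Copy the application source into the package's installation prefix.
  cp -R . "${pkg_prefix}/app"
}
```

A `hooks/run` script then tells the Habitat Supervisor how to start the service and keep it under management.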

After a quick celebration, I went back and broke the application down into a series of small exercises. I built the entire Ruby application and web service from scratch, so if you're only interested in Habitat, you can skip ahead and focus on the Habitat-specific exercises.

If this resource helps make learning Habitat more approachable and delightful then I feel my work has made an impact. Through this entire process I learned a lot about Habitat and I hope you will as well.

Learn More

Try these Habitat tutorials on Learn Chef Rally:

  • Demos and Quickstarts: Try Habitat
    • Explore the ease of packaging, deploying, and running your applications with Habitat.
  • Building Applications with Habitat
    • Combine a few basic ingredients and you'll have the recipe for building modern and legacy applications that run anywhere. See how plans help you build software consistently and how the Supervisor starts, monitors, and acts upon changes made to your services.
  • Deploying Applications with Habitat
    • Ready to serve up your app to your users? Use Habitat Builder to neatly package your app and anything it needs to run. Deploy your app to the cloud or anywhere else and keep it updated in real time.

The post "Habitatize Yourself" a Ruby Web Application in Habitat appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Friday, September 29, 2017

MCPc's Business Tech 2017 - 13 days left to register - 10/12/17

MCPc's Business Tech 2017

Only 13 days left to register. If you're in NE Ohio or the surrounding area, you should attend. Oh yeah, and it's free.

Our Plenary Sessions

  • 12–1:30pm

    Luncheon Presentation: Tom Ridge turns from Homeland Security to Corporate Cyber Attack

    Guidance from America's First Homeland Security Chief
  • 3–4pm

    Women & Technology

    Women IT Leaders on Industry Challenges and Career Milestones
    To inspire the next generation of leadership in technology, the audience will include girls and young women from area schools and colleges.
  • 4–5pm

    Taking IT to the Next Level in Ohio - IoT, Big Data, Security & Innovation

    Executives from BioEnterprise, JobsOhio, CWRU, Microsoft, CCF Innovations, and Deloitte Share Their Insight         

Monday, September 18, 2017

Backing up configs with the Ansible NCLU module



----
Backing up configs with the Ansible NCLU module
// Cumulus Networks Blog

With the release of Ansible 2.3, the Cumulus Linux NCLU module is now part of Ansible core. This means when you `apt-get install ansible`, you get the NCLU module pre-installed! This blog post will focus on using the NCLU module to back up and restore configs on Cumulus Linux. To read more about the NCLU module from its creator, Barry Peddycord, click here.

The consulting team uses Ansible very frequently when helping customers fully automate their data centers. A lot of our playbooks use the Ansible template module because it is very efficient and idempotent, and Cumulus Linux has been built with very robust reload capabilities for both networking and Quagga/FRR. This reload capability allows the box to perform a diff on either `/etc/network/interfaces` or `/etc/quagga/Quagga.conf`, so when a flat-file is overwritten with the template module, only the "diff" (or difference) is applied. This means that if swp1-10 were already working and we added configuration for swp11-20, an ifreload would only add the additional config and be non-disruptive for swp1-10. This reload capability is essential for data centers, and our customers couldn't live without it.
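
To make that pattern concrete, a template-driven task plus a reload handler looks roughly like the sketch below; the group name, template file, and handler name are illustrative, not taken from a real customer playbook.

```yaml
# Illustrative template-based approach (placeholder names throughout)
- hosts: cumulus_switches
  become: true
  tasks:
    - name: Render /etc/network/interfaces from a Jinja2 template
      template:
        src: interfaces.j2            # hypothetical template
        dest: /etc/network/interfaces
      notify: reload interfaces       # fires only when the rendered file changes

  handlers:
    - name: reload interfaces
      command: ifreload -a            # applies only the diff, non-disruptively
```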

However, many customers also want to build configs with NCLU (or the net commands) when they are first introduced to Cumulus Linux. Instead of starting from a flat-file and templating it out, they are going command by command as they learn Cumulus Linux. It is still fairly easy to build a configuration with NCLU, then pull the rendered configuration with the `net show config files` command and build a template out of it.

That being said, I wanted to provide an alternate method of backing up and restoring configs in a very simple playbook that does not require templating or configuration of flat-files. Check out the following Github repo: https://github.com/seanx820/ansible_nclu

NCLU module

There is a README.MD on the Github page, but I will explain briefly here what I am trying to accomplish. The pull_nclu.yml playbook will grab all the net commands from the Cumulus Linux switch(es) and store them locally on the server that ran the playbook. It literally just connects to the Cumulus Linux switch(es), grabs the output of `net show config commands`, and stores it locally. The push_nclu.yml playbook will then re-push these net commands to the Cumulus Linux switch(es) line by line, but in an idempotent way. This means that if the configuration being applied is already present, it will skip it and let you know it is already configured.
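
The repo has the real playbooks, but as a rough sketch of the idea (not the repo's exact contents), the pull-and-push pair could look something like this; the group name and backup path are assumptions.

```yaml
# pull_nclu.yml (sketch): capture the running config as net commands
- hosts: cumulus_switches
  tasks:
    - name: Grab the rendered net commands from the switch
      command: net show config commands
      register: nclu_backup
      changed_when: false

    - name: Store the commands on the machine running the playbook
      copy:
        content: "{{ nclu_backup.stdout }}"
        dest: "backups/{{ inventory_hostname }}.cmds"   # hypothetical path
      delegate_to: localhost

# push_nclu.yml (sketch): replay the saved commands idempotently via the nclu module
- hosts: cumulus_switches
  tasks:
    - name: Re-apply the saved net commands
      nclu:
        template: "{{ lookup('file', 'backups/' + inventory_hostname + '.cmds') }}"
        commit: true
        description: "Restore config from backup"
```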

In my opinion there are advantages and things to consider versus templating, and I will quickly go over some of them:

Advantages of the NCLU module backup method:

  • The NCLU module literally types what a user would type to configure the box. This means it is very easy for someone to figure out what the playbook is doing, and to troubleshoot any issues.
  • The playbooks provided are so simple that only a rudimentary understanding of how Ansible works is required.
  • The NCLU module is still idempotent, so we are not just firing off commands; it knows if something is already configured.
  • No knowledge of Jinja2 or any other templating mechanism is needed. These playbooks simply replay commands in a smart and logical manner.

Things to consider with the NCLU module backup method:

  • Speed! This method literally replays each net command line by line back to the box. This will always be inherently slower than just overwriting a couple of flat-files and performing an ifreload and a systemctl reload quagga.service. Configuring Cumulus Linux with templates can require just four Ansible tasks, versus NCLU, which can literally mean hundreds of lines of net commands. Does it matter? Maybe. It depends on the situation and how you are using Ansible.
  • Config Management: While the NCLU module is idempotent, unless you perform a `net del all` and reset the config, you don't know whether another user or program has configured the box in addition to you. This means that if your config never touched swp10 but someone else configured it, this method would have no concept of swp10 being configured. All the commands you configured will be restored correctly, but there is no concept of an end-state for the box holistically. With templates we know the entire config, not just the part we configured (because we are literally overwriting the entire configuration every time and doing a diff).

There you have it! I always like to think of network engineers adopting network automation in three stages: crawl, walk, and run. I see this method somewhere in the crawl and walk stages. I find myself using it during network POC (proof of concept) labs and when teaching Ansible to first-timers. Let me know what you think in the comments below.

The post Backing up configs with the Ansible NCLU module appeared first on Cumulus Networks Blog.


----

Read in my feedly


Sent from my iPhone