Friday, February 27, 2015

#Docker, #Kubernetes, #CoreOS and Big Data in Apache #CloudStack by Sebastien Goasguen

Thursday, February 26, 2015

When Zero Days Become Weeks or Months [feedly]

When Zero Days Become Weeks or Months
// A Collection of Bromides on Infrastructure

As February comes to a close, we have already seen critical patches from Adobe and Microsoft. Even more concerning, Microsoft has not yet patched a recently disclosed Internet Explorer zero-day. For better or worse, Google's "Project Zero" is putting pressure on vendors like Microsoft to patch reported vulnerabilities within 90 days, after which the details are publicly disclosed; the policy has been a source of public friction between the companies. With this forced public disclosure, there is now a risk of zero-day attacks stretching into weeks or months.


It is easy to feel sympathy for vendors like Adobe and Microsoft, who serve as a public face for the challenges of patching zero-day vulnerabilities. These organizations work ceaselessly and thanklessly to fix, test and deploy patches for vulnerable applications on increasingly short timeframes. Still, it seems that as soon as one vulnerability is patched, another is reported, like a Sisyphean game of Whack-a-Mole.

Of course, it does not feel like a game for information security teams. The stakes are quite real, but present a complicated dilemma. Even the most Draconian IT teams could not suggest prohibiting the use of these vulnerable applications that are the cornerstone of our modern productivity, yet even the most obtuse information security professional realizes the risk they present.

When critical security vulnerabilities exist in nearly every common application, from document readers to Web browsers, then it should come as no surprise that the frequency of cyber attacks seems to be increasing.

Beyond unpatched vulnerabilities, security patches present their own set of problems. It is dangerous to patch without testing because deploying a patch that breaks your systems could do more damage than if you are attacked. Security patches have never existed in a vacuum.

Where does this leave organizations? There is a risk if you don't have the patch, but there is also a risk that you deploy a patch that you didn't test. Google's "Project Zero" is pushing vendors to create patches, but could this pressure create more risk? How can we guarantee Microsoft can fix a bug in 90 days without introducing a new bug or breaking the software?

These conflicting challenges represent a real opportunity for Bromium to take the pressure off of IT teams and software vendors. By leveraging micro-virtualization to isolate vulnerable software, organizations can remain protected while they take the time to test critical patches before deploying them.


Shared via my feedly reader

Sent from my iPhone

The Cloudcast - ByteSized - Monitoring & Logging [feedly]

The Cloudcast - ByteSized - Monitoring & Logging
// The Cloudcast (.NET)

Brian and Jonas Rosland (@virtualswede) discuss Logging and Monitoring systems and how they have evolved with open source projects and SaaS services. Music Credit: Nine Inch Nails (


The Cloudcast - ByteSized - Stateful vs. Stateless Apps [feedly]

The Cloudcast - ByteSized - Stateful vs. Stateless Apps
// The Cloudcast (.NET)

Brian and Jonas Rosland (@virtualswede) talk about the key architectural and deployment differences between Stateful and Stateless Applications. Music Credit: Nine Inch Nails (


Code Black Drone Crash Pack for $19 [feedly]

Code Black Drone Crash Pack for $19
// StackSocial

Prepare for Crash Landings with Extra Rotors, Motors, LED Lights, Rubber Feet & A Crash Bumper
Expires February 24, 2016 23:59 PST
Buy now and get 47% off


Get set for crash recovery with this custom-curated crash pack made exclusively for your Code Black Drone. Don't sweat the crashes - just enjoy the flight.
  • Replace lost or damaged rotors with 16 spare blades
  • Swap out your damaged motor, twice
  • Refresh your drone's armor with an additional blade protector
  • Prepare for night flights w/ replacement LEDs
  • Recoup from crash landing w/ rubber feet


  • Parts compatible with any Code Black Drone, found here.


  • 2 motors
  • 16 rotor blades
  • 1 blade-protecting bumper
  • 2 LED lights
  • 4 rubber feet


  • Free shipping
  • Ships to: Continental US & Hawaii only
  • Shipping lead time: 1-2 weeks



Stone River Academy: Lifetime Subscription for $75 [feedly]

Stone River Academy: Lifetime Subscription for $75
// StackSocial

90+ Courses & Counting on All Things Tech: Coding, Design, 3D-Animation & More
Expires March 28, 2015 23:59 PST
Buy now and get 94% off


Access over 2,000 hours of online learning, and master everything from iOS mobile development to graphic design with this lifetime subscription. With 2 to 5 courses added monthly, you are guaranteed to stay on top of the technology learning curve.
  • Lifetime access to all current & newly added content consisting of over 90 courses, 2000+ hours - no future purchase necessary!
  • All level courses available on web & mobile programming, web design, game app creation, 3D-animation & more
  • Instruction for using Bootstrap, Unity 3D, Java, Python, MySQL, node.js, CSS & more
  • Skills for advancing your career or learning a hobby
To view all courses currently offered, click here.


  • Internet browser required (mobile or desktop)


  • Restrictions: access your account on only 1 device at a time
  • Redeem account within 60 days of purchase
  • Updates (course revisions & new courses) included


Stone River Academy is passionate about teaching people useful topics in interesting ways. From technology, to business, to education, they deliver high-quality courses that take you from beginner to expert in a matter of hours.



CloudStack European User Group Roundup – February 2015 [feedly]

CloudStack European User Group Roundup – February 2015
// CloudStack Consultancy & CloudStack...

Once again Trend Micro played host to the CloudStack European user group. This quarter, with Giles Sirett 'working' at CloudStack Day – Brazil, Geoff Higginbottom, CTO of ShapeBlue, took to the stage to compère the event.

The News

As is customary at the user group, we kicked off with a roundup of news in the CloudStack world since the last meeting. This included a brief look at some of the statistics which had been presented at the CloudStack Collaboration Conference in Budapest, along with information on the upcoming CloudStack Days to be held this year.

Our first speaker was Geoff Higginbottom of ShapeBlue. Geoff took the audience through the changes to XenServer's high availability configuration requirements, and the changes to how CloudStack environments are managed required to support the new XenServer requirements.

 [slideshare id=45131436&doc=xenserverhaimprovements-150225102521-conversion-gate01]

Our second speaker was Richard Chart from ScienceLogic. ScienceLogic deliver a hybrid IT monitoring platform enabling organisations to gain holistic end-to-end visibility across their on-premise and off-premise resources. Richard focused on the CloudStack plug-in (or Action Pack) for their EM7 product.

[slideshare id=45131322&doc=sciencelogiccloudstacklondonmeetup2015-02-11-150225102310-conversion-gate02]

Next up was Wilder Rodrigues from Schuberg Philis. Wilder is part of the team at Schuberg Philis who have been refactoring the code for the VPC virtual router. Wilder took the delegates through the work they have done to clean up the existing code and to offer new functionality, such as the ability to deploy redundant virtual private cloud routers.

[slideshare id=45131438&doc=demovpcacseugroup-150225102523-conversion-gate01]

The next speaker was Paul Angus from ShapeBlue. Paul highlighted the major new features and improvements in the soon-to-be-released version 4.5 of CloudStack, then picked out a number of the features to give additional background on them.

 [slideshare id=45131427&doc=whatsnewacs45-150225102513-conversion-gate01]

Last but by no means least was Daan Hoogland of Schuberg Philis. Daan explained to the audience the work that is being done by the community to improve the quality of the code produced by the project. Daan finished with a call to arms for contributors to follow the guidelines being developed in order to make it easier for code reviewers to check code and understand its purpose, thereby improving the overall standard of code in the project.

 [slideshare id=45131439&doc=london-meetup-feb-2015-150225102523-conversion-gate02]

Thank you to all of the speakers and all of the attendees.

The date for the next user group has been provisionally set as 14th May 2015 – We hope to see you there.



Bento Box Update for CentOS and Fedora [feedly]

Bento Box Update for CentOS and Fedora
// Chef Blog

This is not urgent, but you may encounter SSL verification errors when using Vagrant directly, or Vagrant through Test Kitchen.

Special Thanks to Joe Damato of Package Cloud for spending his time debugging this issue with me the other day.

TL;DR: we found a bug in our bento boxes where the SSL certificates for AWS S3 couldn't be verified by openssl and yum on our CentOS 5.11, CentOS 6.6, and Fedora 21 "bento" boxes, because the VeriSign certificates were removed by the upstream curl project. Update your local boxes: first remove the boxes with vagrant box remove, then rerun Test Kitchen or Vagrant in your project.

We publish Chef Server 12 packages to a great hosted package repository provider, Package Cloud. They provide secure, properly configured yum and apt repositories with SSL, GPG, and all the encrypted bits you can eat. In testing the chef-server cookbook for consuming packages from Package Cloud, I discovered a problem with our bento-built base boxes for CentOS 5.11 and 6.6.

[2015-02-25T19:54:49+00:00] ERROR: chef_server_ingredient[chef-server-core] (chef-server::default line 18) had an error: Mixlib::ShellOut::ShellCommandFailed: packagecloud_repo[chef/stable/] (/tmp/kitchen/cache/cookbooks/chef-server-ingredient/libraries/chef_server_ingredients_provider.rb line 44) had an error: Mixlib::ShellOut::ShellCommandFailed: execute[yum-makecache-chef_stable_] (/tmp/kitchen/cache/cookbooks/packagecloud/providers/repo.rb line 109) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of yum -q makecache -y --disablerepo=* --enablerepo=chef_stable_ ----
...SNIP
  File "/usr/lib64/python2.4/", line 565, in http_error_302
...SNIP
  File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/", line 167, in connect_ssl
    return m2.ssl_connect(self.ssl, self._timeout)
M2Crypto.SSL.SSLError: certificate verify failed

What's going on here?

We're attempting to add the Package Cloud repository configuration and rebuild the yum cache for it. Here is the yum configuration:

[chef_stable_]
name=chef_stable_
baseurl=$basearch
repo_gpgcheck=0
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-packagecloud_io
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Note that the baseurl is https – most package repositories probably aren't going to run into this because most use http. The thing is, despite Package Cloud having a valid SSL certificate, we're getting a verification failure in the certificate chain. Let's look at this with OpenSSL:

$ openssl s_client -CAfile /etc/pki/tls/certs/ca-bundle.crt -connect
CONNECTED(00000003)
depth=3 /C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
verify return:1
depth=2 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
verify return:1
depth=1 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
verify return:1
depth=0 /OU=Domain Control Validated/OU=EssentialSSL/
verify return:1
... SNIP
SSL-Session:
    Verify return code: 0 (ok)

Okay, that looks fine, why is it failing when yum runs? The key is in the python stack trace from yum:

File "/usr/lib64/python2.4/", line 565, in http_error_302  

Package Cloud actually stores the packages in S3, so it redirects to the bucket, Let's check that certificate with openssl:

$ openssl s_client -CAfile /etc/pki/tls/certs/ca-bundle.crt -connect
depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/C=US/ST=Washington/L=Seattle/ Inc./CN=*
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
...SNIP
SSL-Session:
    Verify return code: 20 (unable to get local issuer certificate)

This is getting to the root of the matter as to why yum was failing. Why is it failing though?

As it turns out, the latest CA certificate bundle from the curl project appears to have removed two of the VeriSign certificates, which are used by AWS for

But wait, why does this matter? Shouldn't CentOS have the ca-bundle.crt file that comes from the openssl package?

$ rpm -qf ca-bundle.crt
openssl-0.9.8e-27.el5_10.4

Sure enough. What happened?

$ sudo rpm -V openssl
S.5....T  c /etc/pki/tls/certs/ca-bundle.crt
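The verify string packs one flag per column: S=size, M=mode, 5=digest, D=device, L=link, U=user, G=group, T=mtime, with "." meaning the attribute matches the package database. As a quick illustrative helper (not part of rpm itself), the string above can be decoded like this:

```shell
# Decode an rpm -V flag string such as "S.5....T".
# Column order: size, mode, digest, device, link, user, group, mtime.
decode_rpm_verify() {
  v=$1
  out="changed:"
  i=1
  for label in size mode digest device link user group mtime; do
    c=$(printf '%s' "$v" | cut -c"$i")
    [ "$c" != "." ] && out="$out $label"
    i=$((i + 1))
  done
  printf '%s\n' "$out"
}

decode_rpm_verify "S.5....T"   # -> changed: size digest mtime
```

So the file on disk differs from the packaged one in size, digest, and modification time: it has been replaced by something outside of rpm.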

Wait a second, why is the file different? Well, this is where we get back to the TL;DR. In our bento boxes for CentOS, we had a line in the ks.cfg that looked like this:

wget -O/etc/pki/tls/certs/ca-bundle.crt  

I say "had" because we've since removed this line from the ks.cfg on the affected platforms and rebuilt the boxes. This issue was particularly perplexing at first because the problem didn't happen on our CentOS 5.10 box. At the point in time when that box was built, the cacert.pem bundle still had the VeriSign certificates, but they had been removed by the time we retrieved cacert.pem for the 5.11 and 6.6 base boxes.

Why were we retrieving the bundle in the first place? It's hard to say – that wget line has always been in the ks.cfg for the bento repository. At some point in time it might have been to work around invalid certificates being present in the default package from the distribution, or some other problem. The important thing is that the distribution's package has working certificates, and we want to use that.

So what do you need to do? You should remove your opscode-centos vagrant boxes, and re-add them. You can do this:

for i in opscode-centos-5.10 opscode-centos-5.11 opscode-centos-6.6 opscode-centos-7.0 opscode-fedora-20
do
  vagrant box remove $i
done

Then wherever you're using those boxes in your own projects – cookbooks with test-kitchen, for example – you can simply rerun Test Kitchen and it will download the updated boxes.

If you'd like to first check if your base boxes are affected, you can use the test-cacert cookbook. With ChefDK 0.4.0:

% git clone test-cacert
% cd test-cacert
% kitchen test default



ChefConf Talk Spotlight: A sobering journey from Parse / Facebook [feedly]

ChefConf Talk Spotlight: A sobering journey from Parse / Facebook
// Chef Blog

We're five weeks away from ChefConf 2015 and we're filling seats fast, so, if you haven't already, register today and guarantee your seat at the epicenter of DevOps.

Continuing our series of spotlights on the tremendous talks, workshops, and sponsors at this year's show, today we focus on Awesome Chef Charity Majors – a production engineer at Parse (now part of Facebook) – and her session, "There and back again: how we drank the Chef koolaid, sobered up, and learned to cook responsibly" – a must-see based on the title alone!

Here's the download on Charity's talk:

When we first began using Chef at Parse, we fell in love with it. Chef became our source of truth for everything. Bootstrapping, config files, package management, deploying software, service registration & discovery, db provisioning and backups and restores, cluster management, everything. But at some point we reached Peak Chef and realized our usage model was starting to cause more problems than it was solving for us. We still love the pants off of Chef, but it is not the right tool for every job in every environment. I'll talk about the evolution of Parse's Chef infrastructure, what we've opted to move out of Chef, and some of the tradeoffs involved in using Chef vs other tools.
This will be a great session for all of you looking for guidance on tooling, or even a friendly debate about the subject. It will also provide patterns of success from some seriously smart and active Chefs over at Parse/Facebook.

As for the presenter herself, Charity is happily building out the next generation of mobile platform technology. She likes free software, free speech and single malt scotch.

See you at ChefConf!




Chef Server 12.0.5 Released [feedly]

Chef Server 12.0.5 Released
// Chef Blog

Today we have released Chef Server 12.0.5. This release includes further updates to provide API support for key rotation, policy file updates, and LDAP-related fixes to user update.

You can find installers on our downloads site.

Updating Users

This release fixes Issue 66. Previously, users in LDAP-enabled installations would be unable to log in after resetting their API key or otherwise updating their user record.

This resolves the issue for new installations and currently unaffected user accounts. However, if your installation has users who have already been locked out, please contact Chef Support ( for help repairing their accounts.

This fix has resulted in a minor change in behavior: once a user is placed into recovery mode to bypass LDAP login, they will remain there until explicitly taken out of recovery mode. For more information on how to do that, see this section of chef-server-ctl documentation.

Key Rotation

We're releasing key rotation components as we complete and test them. This week, we've added API POST support, allowing you to create keys for a user or client via the API.
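As a rough illustration, a request against the new endpoint might look like the following. This is a hedged sketch: the host is a placeholder, the endpoint and field names are assumptions based on the keys API design, and a real request must be signed (knife and other Chef API clients handle the signing for you):

```shell
# Hypothetical sketch only: create an additional key for user "alice".
curl -X POST https://chef-server.example/users/alice/keys \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "alice-key-2",
        "public_key": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
        "expiration_date": "infinity"
      }'
```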

Key Rotation Is Still A Feature In Progress

Until key rotation is feature-complete, we continue to recommend that you manage your keys via the users and clients endpoints as is done traditionally.


Policyfile

Work on Policyfile support continues to evolve at a rapid pace. This update includes new GET and POST support for named cookbook artifact identifiers. Policyfile is disabled by default, but if you want to familiarize yourself with what we're trying to do, this RFC is a good place to start.

Release Notes

As always you can view the release notes for more details, and the change log for even more.



Is #OpenStack the 'Space Pen' to CloudStack's 'Pencil'?

Is #OpenStack the 'Space Pen' to CloudStack's 'Pencil'?

Tuesday, February 24, 2015

Vagrant Azure Provider is now Platform Independent [feedly]

Vagrant Azure Provider is now Platform Independent
// MS Open Tech

The Vagrant community has been happily using the Vagrant Azure Provider to deploy Vagrant boxes to Microsoft Azure for a while now. However, in order to provision Windows VMs it was necessary to use a Windows client, as we used PowerShell remoting for provisioning the box. We are pleased to report that when using the latest Vagrant Azure provider this is no longer the case. Linux and Mac users can now deploy and provision both Windows and Linux VMs on Microsoft Azure using Vagrant.

This improvement comes to us courtesy of some great work in the Ruby 'WinRM' Gem, as well as a sizable community contribution to the Vagrant Azure plugin from David Justice. We have also taken the opportunity to merge some other community-provided enhancements. These include:

  • Support provisioning of Windows VMs using WinRM
  • Allow execution of PowerShell scripts on Windows VM during provisioning
  • Ensure Chef users can make use of Vagrant Omnibus
  • Support for rsync during provisioning
  • Ability to connect a VM to a virtual network
  • Documentation for the Multi Machine feature

As you would expect, this release also fixes a few edge case bugs.

Getting started with this plugin is straightforward for anyone already familiar with Vagrant (version 1.6.0 or higher). First, you need to install the plugin:

vagrant plugin install vagrant-azure

In order to use the plugin you will need to provide an Azure Vagrant box. You can define your own box or use the provided dummy box by adding it to your Vagrant client as follows:

vagrant box add azure

This box is configured in the config.vm.provider section of your Vagrantfile as follows:

Vagrant.configure('2') do |config| = 'azure'

  config.vm.provider :azure do |azure|
    azure.mgmt_certificate = 'YOUR AZURE MANAGEMENT CERTIFICATE'
    azure.mgmt_endpoint = ''
    azure.subscription_id = 'YOUR AZURE SUBSCRIPTION ID'

    # Storage account to use. A new account will be created if blank.
    azure.storage_acct_name = 'NAME OF YOUR STORAGE ACCOUNT'

    # Azure image to use
    azure.vm_image = 'NAME OF THE IMAGE TO USE'

    # username defaults to 'vagrant' if not provided
    azure.vm_user = 'PROVIDE A USERNAME'

    # password: min 8 characters. should contain a lower case letter,
    # an uppercase letter, a number and a special character
    azure.vm_password = 'PROVIDE A VALID PASSWORD'

    # max 15 characters. contains letters, numbers and hyphens.
    # Can start with letters and can end with letters and numbers
    azure.vm_name = 'PROVIDE A NAME FOR YOUR VIRTUAL MACHINE'

    # Cloud service to use, defaults to same as vm_name.
    # Leave blank to auto-generate
    azure.cloud_service_name = 'PROVIDE A NAME FOR YOUR CLOUD SERVICE'

    # Deployment name (used in portal and CLI tools) defaults to cloud_service
    azure.deployment_name = 'PROVIDE A NAME FOR YOUR DEPLOYMENT'

    # Data centre to use e.g., West US
    azure.vm_location = 'PROVIDE A LOCATION FOR VM'

    azure.private_key_file = 'PATH TO YOUR KEY FILE'
    azure.certificate_file = 'PATH TO YOUR CERTIFICATE FILE'

    # Provide the following values if creating a *Nix VM
    azure.ssh_port = 'A VALID PUBLIC PORT'

    # Provide the following values if creating a Windows VM
    # Open up winrm ports on both http (5985) and https (5986)
    azure.winrm_transport = [ 'http', 'https' ]

    # customize the winrm https port, defaults to 5986
    azure.winrm_https_port = 'A VALID PUBLIC PORT'

    # customize the winrm http port, defaults to 5985
    azure.winrm_http_port = 'A VALID PUBLIC PORT'

    # opens the Remote Desktop internal port (53389).
    # Without this, you cannot RDP to a Windows VM.
    azure.tcp_endpoints = '3389:53389'
  end

  # SSH username and password used when creating the VM
  config.ssh.username = 'YOUR USERNAME'
  config.ssh.password = 'YOUR PASSWORD'
end


You should, of course, add any provisioning you want using the standard Vagrant provisioning mechanisms. Now you can simply provide the name of the Vagrant Azure provider when running "vagrant up", e.g.:

vagrant up --provider=azure

Having completed these steps you will have a Virtual Machine deployed and configured on Microsoft Azure using Vagrant.



Using Packer with Hyper-V and Microsoft Azure [feedly]

Using Packer with Hyper-V and Microsoft Azure
// MS Open Tech

Packer is a tool for creating identical machine images, that is, static units containing pre-configured operating systems and application software. With Microsoft Open Technologies' latest contributions to the project you can now use Packer to create machine images for Azure and Hyper-V using any computer with Hyper-V enabled.

Pre-baked machine images configured by tools such as Chef and Puppet have many advantages, but they can be difficult to build. Packer closes this gap by providing a single tool to build across multiple environments. Packer is easy to use and automates the creation of any type of machine image. It is an excellent partner for modern configuration management systems, allowing you to use them to install and configure the software within your Packer-made images.

Our contribution provides:

  1. Azure Builder
  2. Hyper-V Builder
  3. azure-custom-script-extension Provisioner
  4. PowerShell Provisioner
  5. azureVmCustomScriptExtension Communicator
  6. PowerShell Communicator

Our code has been complete and usable for a while; in fact we issued a pull request back in November last year. Unfortunately, because our code targets Go 1.3 while Packer currently targets Go 1.2, it has not yet been merged upstream. The good news is that the Packer team have confirmed that the next version of Packer will target either Go 1.3 or 1.4. It is therefore hoped that our code will be included upstream soon.

In the meantime, users can still use Packer to create machine images for Azure or Hyper-V by either checking out our Packer fork or by merging our pull request locally.






Pointing XenServer to a new Open vSwitch Manager (Nicira/NSX) [feedly]

Pointing XenServer to a new Open vSwitch Manager (Nicira/NSX)
// Remi Bergsma's blog

Our XenServer hypervisors use Nicira/NSX for Software Defined Networking (orchestrated by CloudStack). We had to migrate from one controller to another and that could easily be done by changing the Open vSwitch configuration on the hypervisors, like this: It will then get a list of all nodes and use those to communicate. Although this works, […]


OpenDaylight Developer Spotlight: David Jorm [feedly]

OpenDaylight Developer Spotlight: David Jorm
// OpenDaylight blogs

The OpenDaylight community is comprised of leading technologists from around the globe who are working together to transform networking with open source. This blog series highlights the developers, users and researchers collaborating within OpenDaylight to build an open, common platform for SDN and NFV.

About David Jorm

David is a product security engineer based in Brisbane, Australia. He currently leads product security efforts for IIX, a software-defined interconnection company. David has been involved in the security industry for the last 15 years. During this time he has found high-impact and novel flaws in dozens of major Java components. He has worked for Red Hat's security team, led a Chinese startup that failed miserably, and wrote the core aviation meteorology system for the southern hemisphere. In his spare time he tries to stop his two Dachshunds from taking over the house.

What projects in OpenDaylight are you working on? Any new developments to share?

I'm currently primarily working on security efforts across all OpenDaylight projects. We've now got a strong security response team up and running and the next step is to implement a proactive secure engineering program. This program will aim to reduce the number of security issues in OpenDaylight releases and to aid end users with documentation around security configuration and best practices. If any students are interested in contributing to this effort, I'm proposing an OpenDaylight summer internship project:

What do you think is most important for the community to focus on for the next platform release called Lithium?

OpenDaylight is starting to stabilize with powerful new features added all the time. Currently, the documentation effort has not quite kept up with the pace of development. I think it is important for the project to focus on documenting the functionality that already exists and providing clear guides for deploying OpenDaylight across a variety of use cases.

What is the Proof of Concept (PoC) or use case that you hear most about for OpenDaylight?

Managing OpenFlow switches using the OpenDaylight controller seems to be the most common use case. The OpenFlow plugin is advanced and well-documented and I think that this is the use case that we'll primarily see as OpenDaylight is deployed into production in 2015.

Where do you see OpenDaylight in five years?

Over the next couple of years I see OpenDaylight being deployed into production to manage increasingly complex networks of OpenFlow switches. The next step will be connecting these networks to each other and to legacy (non-SDN) IP networks. This will involve the OpenDaylight controller managing layer 3 routing devices. The BGP/LS and PCEP project provides a great starting point for OpenDaylight to manage layer 3 networks and I see this expanding much further.

How would you describe OpenDaylight to a developer interested in joining the community?

I joined the OpenDaylight community by bootstrapping the security response team. Some open source projects can view reports of security issues as an affront or they can ignore them entirely. When I highlighted the pressing need for a security response team, I found the OpenDaylight community to be very supportive. Several existing OpenDaylight developers immediately helped me to get the security response team up and running and to adopt a documented process. I felt welcomed and appreciated. I've participated in several large open source communities and often there is some tension between developers who are employed by rival vendors. My experience in the OpenDaylight community has been free from vendor politics and I think this is a great feature of the community that we should strive to maintain.

What do you hear most from users as a key reason they want SDN?

Proprietary, hardware-based, equipment still powers most networks but the advantages of software-defined networking are coming to the fore. Many people are looking for an alternative that is cheaper, software-based, and gives them the freedom that comes with open source. In the late 1990s to early 2000s, there was a widespread trend to replace proprietary UNIX systems with Linux running on commodity hardware. I see that trend rapidly extending to networking equipment and many people are just waiting for SDN to stabilize and mature before adopting it.

What's your favorite tech conference or event?

Kiwicon, a computer security conference in New Zealand. They combine deep technical content with a fun environment. Last year they brewed their own beer and in past years they've organized for a presenter to arrive on stage on a motorbike. They even let a stuffed toy walrus give a presentation (that's a long story!).



Quick and efficient Ceph DevStacking [feedly]

Quick and efficient Ceph DevStacking
// Ceph

Recently I built a little repository on github/ceph where I put two files to help you build your DevStack Ceph.

$ git clone
$ git clone
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./

Happy DevStacking!



Disney's DevOps Journey: A DevOps Enterprise Summit Reprise [feedly]

Disney's DevOps Journey: A DevOps Enterprise Summit Reprise
// Puppet Labs

Jason Cox was a huge hit at DevOps Enterprise Summit 2014. Video of his talk hasn't been published yet, so we reprise Jason's story of DevOps at Disney in this blog post.



Fundamentals of NoSQL Security [feedly]

Fundamentals of NoSQL Security
// Basho

February 23, 2015

Over the last week, for a variety of reasons, the topic of security in the NoSQL space has become a prominent news item. Chief among these reasons was the disclosure that a popular NoSQL database had multiple instances exposed to the public internet. From the headlines you might think that NoSQL solutions have inherent security problems; in some cases, the discussion is even positioned intentionally as a relational vs. NoSQL issue. The reality is that NoSQL is neither more nor less secure than a traditional RDBMS.

The security of any component of the technology stack is the responsibility of both the vendor providing the technology and those deploying it. How many routers are still running with the default administrative password set? Similarly, exposing any database, regardless of type, to the public internet without taking appropriate security precautions, including user authentication and authorization, is a bad idea. A base level of network security is an absolute requirement when deploying any data persistence utility. For Riak this can include:

  • Appropriate physical security (including policies about root access)
  • Securing the epmd listener port, the handoff_port listener port, and the range of ports specified in riak.conf
  • Defining users and optionally, groups (using Riak Security in Riak 2.0)
  • Defining an authentication source for each user
  • Granting necessary permissions to each user (and/or group)
  • Checking Erlang MapReduce code for invocations of Riak modules other than riak_kv_mapreduce
  • Ensuring your client software passes authentication information with each request and supports HTTPS or encrypted Protocol Buffers traffic
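As a rough sketch of what the user, group, and permission steps above can look like in practice, here are the corresponding riak-admin commands from Riak 2.0's security subsystem (the user name, password, and network range shown are illustrative, not recommendations):

```shell
# Enable Riak Security (only after SSL is configured on every node)
riak-admin security enable

# Define a user and an authentication source for that user
riak-admin security add-user riakuser password=ExamplePass123
riak-admin security add-source riakuser 10.0.0.0/24 password

# Grant only the permissions the user actually needs
riak-admin security grant riak_kv.get,riak_kv.put on any to riakuser
```

Granting the narrowest set of permissions per user (or per group) keeps the blast radius small if a credential is compromised.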

If you enable Riak security without an established, functioning SSL connection, all requests to Riak will fail, because Riak security (when enabled) requires a secure SSL connection. You will need to generate SSL certificates, enable SSL, and establish a certificate configuration on each node.
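For reference, the per-node certificate configuration lives in riak.conf; a minimal sketch looks like the following (the file paths here are illustrative and will differ per installation):

```
ssl.certfile = /etc/riak/cert.pem
ssl.keyfile = /etc/riak/key.pem
ssl.cacertfile = /etc/riak/cacertfile.pem
```

Each node needs its own certificate and key, signed by a CA that the clients (and other nodes) trust.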

The security discussion does not, however, end at the network. For those familiar with the Open Systems Interconnection (OSI) model, the seven-layer conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers (ISO 7498-1), there is a corresponding security architecture reference (ISO 7498-2), and that is just for the network. It is necessary to adopt a comprehensive approach to security at every layer of the application stack, including the database.

The process of securing a database, which is only one component of the application stack, requires striking a fine balance. Basho has worked with large enterprise customers to ensure that Riak's security architecture meets the needs of their application deployments and balances the effort required against the security and compliance requirements demanded by some of the world's largest deployments.

NoSQL vs. Relational Security

As enterprises continue to adopt NoSQL more broadly, the question of security will continue to be raised. The reality is simple: evaluate the security of the database you are exploring in the same way that you would evaluate its scalability or availability characteristics. There is nothing inherent to the NoSQL market that makes it less, or more, secure than relational databases. It is true that some relational databases, by virtue of their age and maturity, have more expansive security tooling available. However, when you adopt a holistic, risk-based approach to security, NoSQL solutions like Riak are as secure as required.

Security and Compliance

A compliance checklist (be it HIPAA or PCI) details, in varying specificity, the security requirements to achieve compliance. This checklist is subsequently verified through an audit by an independent entity, as well as through ongoing internal audits.

So can I use NoSQL in compliant environments?

Without question, yes. How difficult compliance is to achieve will depend on how the database is configured, what controls it provides for authentication and authorization, and many other elements of your application stack (including the physical security of the datacenter). Basho customers have deployed Riak in highly regulated environments and met their compliance requirements.

I would encourage you, however, to realize that compliance is an event, while securing your application, database, datacenter, and so on is an ongoing exercise. Many, particularly those in the payments industry, refer to this as a "risk-based" approach to security rather than a "compliance-based" approach.

Security and Riak

In nearly all commercial deployments, Riak runs on a trusted network with unauthorized access restricted by firewall routing rules. This is expected, this is necessary, and it is sufficient for many use cases (when included as part of a holistic security posture that covers locking down ports, reasonable policies regarding root access, etc.). Some applications, however, need an additional layer of security to meet business or regulatory compliance requirements.

To that end, the security story changed substantially in Riak 2.0. While you should, without question, still apply network-layer security around Riak and the systems it runs on, there are now security features built into Riak that protect Riak itself, not just its network. These include authentication (the process of identifying a user) and authorization (verifying whether the authenticated user may perform the requested operation). Riak's new security features were explicitly modeled after user- and role-based systems such as PostgreSQL's, so the basic architecture of Riak Security should be familiar to most.
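To make the distinction concrete, here is a generic sketch of the two checks (this is illustrative pseudocode-style Python, not Riak's implementation; the user table and permission names are invented):

```python
# Authentication answers "who are you?"; authorization answers "may you do this?"
USERS = {"alice": {"password": "s3cret", "permissions": {"read", "write"}}}

def authenticate(username, password):
    """Return True if the credentials identify a known user."""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username, operation):
    """Return True only if the (already authenticated) user holds the permission."""
    user = USERS.get(username)
    return user is not None and operation in user["permissions"]

print(authenticate("alice", "s3cret"))  # True: identity verified
print(authorize("alice", "delete"))     # False: no delete permission granted
```

A database enforcing both checks rejects a request when either the identity cannot be verified or the verified identity lacks the specific permission.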

In Riak, administrators can selectively control access to a wide variety of Riak functionality. Riak Security allows you to both authorize users to perform specific tasks (from standard read/write/delete operations to search queries to managing bucket types and more) and to authenticate users and clients using a variety of security mechanisms. In other words, Riak operators can now verify who a connecting client is and determine what that client is allowed to do (if anything). In addition, Riak Security in 2.0 provides four options for security sources:

  • trust — Any user accessing Riak from a specified IP may perform the permitted operations
  • password — Authenticate with username and password (works essentially like basic auth)
  • pam — Authenticate using a pluggable authentication module (PAM)
  • certificate — Authenticate using client-side certificates
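A hedged sketch of how these sources are assigned with riak-admin in Riak 2.0 (the network ranges and user names below are illustrative only):

```shell
# Trust any client connecting from the internal network, for all users
riak-admin security add-source all 192.168.0.0/16 trust

# Require a password for riakuser connecting from anywhere
riak-admin security add-source riakuser 0.0.0.0/0 password

# Require a client-side certificate for a certificate-based user
riak-admin security add-source certuser 0.0.0.0/0 certificate
```

Sources are matched per user and per network range, so a single cluster can mix trust on a locked-down management network with password or certificate authentication for everything else.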

More detail on the Riak 2.0 security capabilities is presented in the Security section of the documentation, in particular the section entitled Authentication and Authorization.

With a NoSQL system that provides authentication and authorization, and a properly secured network, you have progressed a long way in reducing the risk profile of your system. The application layer, of course, must still be considered.

Learn More

Relational databases are still a part of the technology stack for many companies; others are innovating and incorporating NoSQL solutions either as a replacement for, or alongside, existing relational databases. As a result, they have simplified their deployments, enhanced their availability, and reduced their costs.

Join us for this webinar, where we will look at the differences between relational databases and NoSQL databases like Riak, and at why companies choose Riak over a relational database. We will analyze the decision points you should consider when choosing between relational and NoSQL databases, look at specific use cases, and review data modeling and query options.

This webinar is being held in two time slots:

Tyler Hannan



The New Nutanix Plugin for XenDesktop is Citrix Ready [feedly]

The New Nutanix Plugin for XenDesktop is Citrix Ready
// Citrix Blogs

By Guest Blogger Andre Leibovici. Nutanix and Citrix have collaborated to create an innovative new way to assign Service Level Agreements (SLAs) for virtual desktops. The Nutanix Plugin for XenDesktop enables Citrix administrators to answer questions like "how can I ensure my CxO desktops are fully protected, or guarantee that the Development team desktops are getting full performance?" Here are Some More…

Read More



How to Prevent a DOS Via User Lockouts at NetScaler Gateway [feedly]

How to Prevent a DOS Via User Lockouts at NetScaler Gateway
// Citrix Blogs

Before we begin, let me first say: "All NetScaler Gateway landing page customizations are unsupported. Changing the NetScaler Gateway landing page will cause you to have an unsupported environment. I do not condone malicious attempts to lock out user accounts. The purpose of this article is to highlight a current risk and mitigation steps." Now that that is out of the way, let's start with the…

Read More

