Friday, February 27, 2015

#Docker, #Kubernetes, #CoreOS and Big Data in Apache #CloudStack by Sebastien Goasguen

Thursday, February 26, 2015

When Zero Days Become Weeks or Months [feedly]



----
When Zero Days Become Weeks or Months
// A Collection of Bromides on Infrastructure

As February comes to a close, we have already seen critical patches from Adobe and Microsoft. Even more concerning, Microsoft has not yet patched a recently disclosed Internet Explorer zero-day. For better or worse, Google's "Project Zero" is putting pressure on vendors like Microsoft to patch reported vulnerabilities within 90 days, before public disclosure, which has been a source of public friction between the companies. With this forced public disclosure, there is now a risk of zero-day attacks stretching into weeks or months.

Patches

It is easy to feel sympathy for vendors like Adobe and Microsoft, who serve as a public face for the challenges of patching zero-day vulnerabilities. These organizations work ceaselessly and thanklessly to fix, test and deploy patches for vulnerable applications on increasingly short timeframes. Still, it seems that as soon as one vulnerability is patched, another is reported, like a Sisyphean game of Whack-a-Mole.

Of course, it does not feel like a game for information security teams. The stakes are quite real, but present a complicated dilemma. Even the most Draconian IT teams could not suggest prohibiting the use of these vulnerable applications that are the cornerstone of our modern productivity, yet even the most obtuse information security professional realizes the risk they present.

When critical security vulnerabilities exist in nearly every common application, from document readers to Web browsers, then it should come as no surprise that the frequency of cyber attacks seems to be increasing.

Beyond unpatched vulnerabilities, security patches present their own set of problems. It is dangerous to patch without testing, because deploying a patch that breaks your systems could do more damage than an attack would. Security patches have never existed in a vacuum.

Where does this leave organizations? There is a risk if you don't have the patch, but there is also a risk that you deploy a patch that you didn't test. Google's "Project Zero" is pushing vendors to create patches, but could this pressure create more risk? How can we guarantee Microsoft can fix a bug in 90 days without introducing a new bug or breaking the software?

These conflicting challenges represent a real opportunity for Bromium to take the pressure off IT teams and software vendors. By leveraging micro-virtualization to isolate vulnerable software, organizations can remain protected while they take the time to test critical patches before they deploy.



----

Shared via my feedly reader


Sent from my iPhone

The Cloudcast - ByteSized - Monitoring & Logging [feedly]



----
The Cloudcast - ByteSized - Monitoring & Logging
// The Cloudcast (.NET)

Brian and Jonas Rosland (@virtualswede) discuss Logging and Monitoring systems and how they have evolved with open source projects and SaaS services. Music Credit: Nine Inch Nails (www.nin.com)
----

Shared via my feedly reader


Sent from my iPhone

The Cloudcast - ByteSized - Stateful vs. Stateless Apps [feedly]



----
The Cloudcast - ByteSized - Stateful vs. Stateless Apps
// The Cloudcast (.NET)

Brian and Jonas Rosland (@virtualswede) talk about the key architectural and deployment differences between Stateful and Stateless Applications. Music Credit: Nine Inch Nails (www.nin.com)
----

Shared via my feedly reader


Sent from my iPhone

Code Black Drone Crash Pack for $19 [feedly]



----
Code Black Drone Crash Pack for $19
// StackSocial

Prepare for Crash Landings with Extra Rotors, Motors, LED Lights, Rubber Feet & A Crash Bumper
Expires February 24, 2016 23:59 PST
Buy now and get 47% off


KEY FEATURES

Get set for crash recovery with this custom-curated crash pack made exclusively for your Code Black Drone. Don't sweat the crashes - just enjoy the flight.
  • Replace lost or damaged rotors with 16 spare blades
  • Swap out your damaged motor, twice
  • Refresh your drone's armor with an additional blade protector
  • Prepare for night flights w/ replacement LEDs
  • Recover from crash landings w/ rubber feet

COMPATIBILITY

  • Parts compatible with any Code Black Drone, found here.

PRODUCT SPECS

Includes:
  • 2 motors
  • 16 rotor blades
  • 1 blade-protecting bumper
  • 2 LED lights
  • 4 rubber feet

SHIPPING DETAILS

  • Free shipping
  • Ships to: Continental US & Hawaii only
  • Shipping lead time: 1-2 weeks

----

Shared via my feedly reader


Sent from my iPhone

Stone River Academy: Lifetime Subscription for $75 [feedly]



----
Stone River Academy: Lifetime Subscription for $75
// StackSocial

90+ Courses & Counting on All Things Tech: Coding, Design, 3D-Animation & More
Expires March 28, 2015 23:59 PST
Buy now and get 94% off




KEY FEATURES

Access over 2,000 hours of online learning, and master everything from iOS mobile development to graphic design with this lifetime subscription. With 2 to 5 courses added monthly, you are guaranteed to stay on top of the technology learning curve.
  • Lifetime access to all current & newly added content consisting of over 90 courses, 2000+ hours - no future purchase necessary!
  • Courses at all levels on web & mobile programming, web design, game app creation, 3D-animation & more
  • Instruction for using Bootstrap, Unity 3D, Java, Python, MySQL, node.js, CSS & more
  • Skills for advancing your career or learning a hobby
To view all courses currently offered, click here.

COMPATIBILITY

  • Internet browser required (mobile or desktop)

PRODUCT SPECS

  • Restrictions: access your account on only 1 device at a time
  • Redeem account within 60 days of purchase
  • Updates (course revisions & new courses) included

THE EXPERT

Stone River Academy is passionate about teaching people useful topics in interesting ways. From technology, to business, to education, they deliver high-quality courses that take you from beginner to expert in a matter of hours.

----

Shared via my feedly reader


Sent from my iPhone

CloudStack European User Group Roundup – February 2015 [feedly]



----
CloudStack European User Group Roundup – February 2015
// CloudStack Consultancy & CloudStack...

Once again Trend Micro played host to the CloudStack European user group. With Giles Sirett 'working' at CloudStack Day Brazil this quarter, Geoff Higginbottom, CTO of ShapeBlue, took to the stage to compère the event.

The News

As is customary at the user group, we kicked off with a roundup of news in the CloudStack world since the last meeting. This included a brief look at some of the statistics presented at the CloudStack Collaboration Conference in Budapest, along with information on the upcoming CloudStack Days to be held this year.

Our first speaker was Geoff Higginbottom of ShapeBlue. Geoff took the audience through the changes to XenServer's high availability configuration requirements, and the corresponding changes to how CloudStack environments must be managed to support them.

 [slideshare id=45131436&doc=xenserverhaimprovements-150225102521-conversion-gate01]

Our second speaker was Richard Chart from ScienceLogic. ScienceLogic deliver a hybrid IT monitoring platform enabling organisations to gain holistic, end-to-end visibility across their on-premise and off-premise resources. Richard focused on the CloudStack plug-in (or Action Pack) for their EM7 product.

[slideshare id=45131322&doc=sciencelogiccloudstacklondonmeetup2015-02-11-150225102310-conversion-gate02]

Next up was Wilder Rodrigues from Schuberg Philis. Wilder is part of the team at Schuberg Philis who have been refactoring the code for the VPC virtual router. Wilder took the delegates through the work they have done to clean up the existing code and to offer new functionality, such as the ability to deploy redundant virtual private cloud routers.

[slideshare id=45131438&doc=demovpcacseugroup-150225102523-conversion-gate01]

The next speaker was Paul Angus from ShapeBlue. Paul highlighted the major new features and improvements in the soon-to-be-released version 4.5 of CloudStack, then picked out a number of the features to give additional background on them.

 [slideshare id=45131427&doc=whatsnewacs45-150225102513-conversion-gate01]

Last but by no means least was Daan Hoogland of Schuberg Philis. Daan explained to the audience the work being done by the community to improve the quality of the code produced by the project. He finished with a call to arms for contributors to follow the guidelines being developed, in order to make it easier for code reviewers to check code and understand its purpose, thereby improving the overall standard of code in the project.

 [slideshare id=45131439&doc=london-meetup-feb-2015-150225102523-conversion-gate02]

Thank you to all of the speakers and all of the attendees.

The date for the next user group has been provisionally set as 14th May 2015 – We hope to see you there.


----

Shared via my feedly reader


Sent from my iPhone

Bento Box Update for CentOS and Fedora [feedly]



----
Bento Box Update for CentOS and Fedora
// Chef Blog

This is not urgent, but you may encounter SSL verification errors when using vagrant directly, or vagrant through test kitchen.

Special Thanks to Joe Damato of Package Cloud for spending his time debugging this issue with me the other day.

TL;DR: we found a bug in our bento boxes where the SSL certificates for AWS S3 couldn't be verified by openssl and yum on our CentOS 5.11, CentOS 6.6, and Fedora 21 "bento" boxes, because the VeriSign certificates were removed by the upstream curl project. Update your local boxes: first remove the boxes with vagrant box remove, then rerun test kitchen or vagrant in your project.

We publish Chef Server 12 packages to a great hosted package repository provider, Package Cloud. They provide secure, properly configured yum and apt repositories with SSL, GPG, and all the encrypted bits you can eat. In testing the chef-server cookbook for consuming packages from Package Cloud, I discovered a problem with our bento-built base boxes for CentOS 5.11 and 6.6.

[2015-02-25T19:54:49+00:00] ERROR: chef_server_ingredient[chef-server-core] (chef-server::default line 18) had an error:
Mixlib::ShellOut::ShellCommandFailed: packagecloud_repo[chef/stable/] (/tmp/kitchen/cache/cookbooks/chef-server-ingredient/libraries/chef_server_ingredients_provider.rb line 44) had an error:
Mixlib::ShellOut::ShellCommandFailed: execute[yum-makecache-chef_stable_] (/tmp/kitchen/cache/cookbooks/packagecloud/providers/repo.rb line 109) had an error:
Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of yum -q makecache -y --disablerepo=* --enablerepo=chef_stable_ ----
...SNIP
  File "/usr/lib64/python2.4/urllib2.py", line 565, in http_error_302
...SNIP
  File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl
    return m2.ssl_connect(self.ssl, self._timeout)
M2Crypto.SSL.SSLError: certificate verify failed

What's going on here?

We're attempting to add the Package Cloud repository configuration and rebuild the yum cache for it. Here is the yum configuration:

[chef_stable_]
name=chef_stable_
baseurl=https://packagecloud.io/chef/stable/el/5/$basearch
repo_gpgcheck=0
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-packagecloud_io
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Note that the baseurl is https – most package repositories probably aren't going to run into this because most use http. The thing is, despite Package Cloud having a valid SSL certificate, we're getting a verification failure in the certificate chain. Let's look at this with OpenSSL:

$ openssl s_client -CAfile /etc/pki/tls/certs/ca-bundle.crt -connect packagecloud.io:443
CONNECTED(00000003)
depth=3 /C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
verify return:1
depth=2 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
verify return:1
depth=1 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
verify return:1
depth=0 /OU=Domain Control Validated/OU=EssentialSSL/CN=packagecloud.io
verify return:1
... SNIP
SSL-Session:
    Verify return code: 0 (ok)

Okay, that looks fine, why is it failing when yum runs? The key is in the python stack trace from yum:

File "/usr/lib64/python2.4/urllib2.py", line 565, in http_error_302  

Package Cloud actually stores the packages in S3, so it redirects to the bucket, packagecloud-repositories.s3.amazonaws.com. Let's check that certificate with openssl:

$ openssl s_client -CAfile /etc/pki/tls/certs/ca-bundle.crt -connect packagecloud-repositories.s3.amazonaws.com:443
depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=*.s3.amazonaws.com
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
...SNIP
SSL-Session:
    Verify return code: 20 (unable to get local issuer certificate)

This is getting to the root of why yum was failing. But why is the certificate verification failing?

As it turns out, the latest CA certificate bundle from the curl project appears to have removed two of the VeriSign certificates, which are used by AWS for https://s3.amazonaws.com.

But wait, why does this matter? Shouldn't CentOS have the ca-bundle.crt file that comes from the openssl package?

$ rpm -qf ca-bundle.crt
openssl-0.9.8e-27.el5_10.4

Sure enough. What happened?

$ sudo rpm -V openssl
S.5....T  c /etc/pki/tls/certs/ca-bundle.crt

Wait a second, why is the file different? Well, this is where we get back to the TL;DR. In our bento boxes for CentOS, we had a line in the ks.cfg that looked like this:

wget -O/etc/pki/tls/certs/ca-bundle.crt http://curl.haxx.se/ca/cacert.pem  

I say "had" because we've since removed this from the ks.cfg on the affected platforms and rebuilt the boxes. This issue was particularly perplexing at first because the problem didn't happen on our CentOS 5.10 box. At the point in time when that box was built, the cacert.pem bundle still had the VeriSign certificates, but they had been removed by the time we retrieved cacert.pem for the 5.11 and 6.6 base boxes.

Why were we retrieving the bundle in the first place? It's hard to say – that wget line has always been in the ks.cfg for the bento repository. At some point in time it might have been to work around invalid certificates being present in the default package from the distribution, or some other problem. The important thing is that the distribution's package has working certificates, and we want to use that.

So what do you need to do? You should remove your opscode-centos vagrant boxes, and re-add them. You can do this:

for i in opscode-centos-5.10 opscode-centos-5.11 opscode-centos-6.6 opscode-centos-7.0 opscode-fedora-20
do
  vagrant box remove $i
done

Then wherever you're using those boxes in your own projects (cookbooks with test-kitchen, for example), you can simply rerun test kitchen and it will download the updated boxes.

If you'd like to first check if your base boxes are affected, you can use the test-cacert cookbook. With ChefDK 0.4.0:

% git clone https://github.com/jtimberman/test-cacert-cookbook test-cacert
% cd test-cacert
% kitchen test default

----

Shared via my feedly reader


Sent from my iPhone

ChefConf Talk Spotlight: A sobering journey from Parse / Facebook [feedly]



----
ChefConf Talk Spotlight: A sobering journey from Parse / Facebook
// Chef Blog

We're five weeks away from ChefConf 2015 and we're filling seats fast, so, if you haven't already, register today and guarantee your seat at the epicenter of DevOps.

Continuing our series of spotlights on the tremendous talks, workshops, and sponsors at this year's show, today we focus on Awesome Chef Charity Majors – a production engineer at Parse (now part of Facebook) – and her session, "There and back again: how we drank the Chef koolaid, sobered up, and learned to cook responsibly" – a must-see based on the title alone!

Here's the download on Charity's talk:

When we first began using Chef at Parse, we fell in love with it. Chef became our source of truth for everything. Bootstrapping, config files, package management, deploying software, service registration & discovery, db provisioning and backups and restores, cluster management, everything. But at some point we reached Peak Chef and realized our usage model was starting to cause more problems than it was solving for us. We still love the pants off of Chef, but it is not the right tool for every job in every environment. I'll talk about the evolution of Parse's Chef infrastructure, what we've opted to move out of Chef, and some of the tradeoffs involved in using Chef vs other tools.
This will be a great session for all of you looking for guidance on tooling, or even a friendly debate about the subject. It will also provide patterns of success from some seriously smart and active Chefs over at Parse/Facebook.

As for the presenter herself, Charity is happily building out the next generation of mobile platform technology. She likes free software, free speech and single malt scotch.

See you at ChefConf!

 


----

Shared via my feedly reader


Sent from my iPhone

Chef Server 12.0.5 Released [feedly]



----
Chef Server 12.0.5 Released
// Chef Blog

Today we have released Chef Server 12.0.5. This release includes further updates to provide API support for key rotation, policy file updates, and LDAP-related fixes to user update.

You can find installers on our downloads site.

Updating Users

This release fixes Issue 66. Previously, users in LDAP-enabled installations would be unable to log in after resetting their API key or otherwise updating their user record.

This resolves the issue for new installations and currently unaffected user accounts. However, if your installation has users who have already been locked out, please contact Chef Support (support@chef.io) for help repairing their accounts.

This fix has resulted in a minor change in behavior: once a user is placed into recovery mode to bypass LDAP login, they will remain there until explicitly taken out of recovery mode. For more information on how to do that, see this section of chef-server-ctl documentation.

Key Rotation

We're releasing key rotation components as we complete and test them. This week, we've added API POST support, allowing you to create keys for a user or client via the API.
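As a rough sketch of what such a request can look like (the user name, key name, and key material below are hypothetical, and knife raw is just one convenient way to send a signed request to the Chef Server API):

# Write a request body for the keys endpoint (all values are illustrative)
cat > new_key.json <<'EOF'
{
  "name": "new-key",
  "public_key": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
  "expiration_date": "infinity"
}
EOF

# POST it to the user's keys endpoint as a signed request
knife raw --method POST --input new_key.json /users/alice/keys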

Key Rotation Is Still A Feature In Progress

Until key rotation is feature-complete, we continue to recommend that you manage your keys via the users and clients endpoints as is done traditionally.

Policyfile

Work on Policyfile support continues to evolve at a rapid pace. This update includes new GET and POST support for named cookbook artifact identifiers. Policyfile is disabled by default, but if you want to familiarize yourself with what we're trying to do, this RFC is a good place to start.

Release Notes

As always you can view the release notes for more details, and the change log for even more.


----

Shared via my feedly reader


Sent from my iPhone

Is #OpenStack the 'Space Pen' to CloudStack's 'Pencil'?


Tuesday, February 24, 2015

Vagrant Azure Provider is now Platform Independent [feedly]



----
Vagrant Azure Provider is now Platform Independent
// MS Open Tech

The Vagrant community has been happily using the Vagrant Azure Provider to deploy Vagrant boxes to Microsoft Azure for a while now. However, in order to provision Windows VMs it was necessary to use a Windows client, as we used PowerShell remoting for provisioning the box. We are pleased to report that with the latest Vagrant Azure provider this is no longer the case. Linux and Mac users can now deploy and provision both Windows and Linux VMs on Microsoft Azure using Vagrant.

This improvement comes to us courtesy of some great work in the Ruby 'WinRM' gem, as well as a sizable community contribution to the Vagrant Azure plugin from David Justice. We have also taken the opportunity to merge some other community-provided enhancements. These include:

  • Support provisioning of Windows VMs using WinRM
  • Allow execution of PowerShell scripts on Windows VM during provisioning
  • Ensure Chef users can make use of Vagrant Omnibus
  • Support for rsync during provisioning
  • Ability to connect a VM to a virtual network
  • Documentation for the Multi Machine feature

As you would expect, this release also fixes a few edge-case bugs.

Getting started with this plugin is straightforward for anyone already familiar with Vagrant (version 1.6.0 or higher). First, you need to install the plugin:

vagrant plugin install vagrant-azure

In order to use the plugin you will need to provide an Azure Vagrant box. You can define your own box or use the provided dummy box by adding it to your Vagrant client as follows:

vagrant box add azure https://github.com/msopentech/vagrant-azure/raw/master/dummy.box

This box is configured in the config.vm.provider section of your Vagrantfile as follows:

Vagrant.configure('2') do |config|
  config.vm.box = 'azure'

  config.vm.provider :azure do |azure|
    azure.mgmt_certificate = 'YOUR AZURE MANAGEMENT CERTIFICATE'
    azure.mgmt_endpoint = 'https://management.core.windows.net'
    azure.subscription_id = 'YOUR AZURE SUBSCRIPTION ID'

    # Storage account to use. A new account will be created if blank.
    azure.storage_acct_name = 'NAME OF YOUR STORAGE ACCOUNT'

    # Azure image to use
    azure.vm_image = 'NAME OF THE IMAGE TO USE'

    # username defaults to 'vagrant' if not provided
    azure.vm_user = 'PROVIDE A USERNAME'

    # password: min 8 characters. should contain a lower case letter,
    # an uppercase letter, a number and a special character
    azure.vm_password = 'PROVIDE A VALID PASSWORD'

    # max 15 characters. contains letters, numbers and hyphens.
    # Can start with letters and can end with letters and numbers
    azure.vm_name = 'PROVIDE A NAME FOR YOUR VIRTUAL MACHINE'

    # Cloud service to use, defaults to same as vm_name.
    # Leave blank to auto-generate
    azure.cloud_service_name = 'PROVIDE A NAME FOR YOUR CLOUD SERVICE'

    # Deployment name (used in portal and CLI tools), defaults to cloud_service_name
    azure.deployment_name = 'PROVIDE A NAME FOR YOUR DEPLOYMENT'

    # Data centre to use, e.g. West US
    azure.vm_location = 'PROVIDE A LOCATION FOR VM'

    azure.private_key_file = 'PATH TO YOUR KEY FILE'
    azure.certificate_file = 'PATH TO YOUR CERTIFICATE FILE'

    # Provide the following values if creating a *nix VM
    azure.ssh_port = 'A VALID PUBLIC PORT'

    # Provide the following values if creating a Windows VM
    # Open up winrm ports on both http (5985) and https (5986)
    azure.winrm_transport = [ 'http', 'https' ]

    # customize the winrm https port, defaults to 5986
    azure.winrm_https_port = 'A VALID PUBLIC PORT'

    # customize the winrm http port, defaults to 5985
    azure.winrm_http_port = 'A VALID PUBLIC PORT'

    # opens the Remote Desktop internal port (53389).
    # Without this, you cannot RDP to a Windows VM.
    azure.tcp_endpoints = '3389:53389'
  end

  # SSH username and password used when creating the VM
  config.ssh.username = 'YOUR USERNAME'
  config.ssh.password = 'YOUR PASSWORD'
end

 

You should, of course, add any provisioning you want using the standard Vagrant provisioning mechanisms. Now you can simply provide the name of the Vagrant Azure provider when running "vagrant up", e.g.:

vagrant up --provider=azure

Having completed these steps you will have a Virtual Machine deployed and configured on Microsoft Azure using Vagrant.


----

Shared via my feedly reader


Sent from my iPhone

Using Packer with Hyper-V and Microsoft Azure [feedly]



----
Using Packer with Hyper-V and Microsoft Azure
// MS Open Tech

Packer is a tool for creating identical machine images: static units containing pre-configured operating systems and application software. With Microsoft Open Technologies' latest contributions to the project, you can now use Packer to create machine images for Azure and Hyper-V using any computer with Hyper-V enabled.

Pre-baked machine images configured by tools such as Chef and Puppet have many advantages, but they can be difficult to build. Packer closes this gap by providing a single tool to build across multiple environments. Packer is easy to use and automates the creation of any type of machine image. It is an excellent partner for modern configuration management systems, allowing you to use them to install and configure the software within your Packer-made images.

Our contribution provides:

  1. Azure Builder
  2. Hyper-V Builder
  3. azure-custom-script-extension Provisioner
  4. PowerShell Provisioner
  5. azureVmCustomScriptExtension Communicator
  6. PowerShell Communicator

Our code has been complete and usable for a while; in fact we issued a pull request back in November last year. Unfortunately, because our code targets Go 1.3 while Packer currently targets Go 1.2 it has not yet been merged upstream. The good news is that the Packer team have confirmed that the next version of Packer will target either Go 1.3 or 1.4. It is therefore hoped that our code will be included upstream soon.

In the meantime, users can still use Packer to create machine images for Azure or Hyper-V by either checking out our Packer fork or by merging our pull request locally.
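For the second option, a minimal sketch of merging a GitHub pull request into a local build (the branch name is arbitrary, and <PR_NUMBER> stands in for the pull request's actual number, which we leave unspecified here):

# Clone upstream Packer and merge the pull request locally
git clone https://github.com/mitchellh/packer.git
cd packer
git fetch origin pull/<PR_NUMBER>/head:msopentech-contrib
git merge msopentech-contrib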

 

 

 


----

Shared via my feedly reader


Sent from my iPhone

Pointing XenServer to a new Open vSwitch Manager (Nicira/NSX) [feedly]



----
Pointing XenServer to a new Open vSwitch Manager (Nicira/NSX)
// Remi Bergsma's blog

Our XenServer hypervisors use Nicira/NSX for Software Defined Networking (orchestrated by CloudStack). We had to migrate from one controller to another and that could easily be done by changing the Open vSwitch configuration on the hypervisors, like this: It will then get a list of all nodes and use those to communicate. Although this works, […]
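The excerpt above elides the actual configuration change. As a hedged sketch of the general approach using the standard Open vSwitch CLI (the manager address and port below are hypothetical placeholders, not the values from the original post):

# Point the vSwitch at the new manager/controller endpoint
ovs-vsctl set-manager ssl:192.0.2.10:6632

# Confirm the manager is set
ovs-vsctl show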
----

Shared via my feedly reader


Sent from my iPhone

OpenDaylight Developer Spotlight: David Jorm [feedly]



----
OpenDaylight Developer Spotlight: David Jorm
// OpenDaylight blogs

The OpenDaylight community is comprised of leading technologists from around the globe who are working together to transform networking with open source. This blog series highlights the developers, users and researchers collaborating within OpenDaylight to build an open, common platform for SDN and NFV.

About David Jorm

David is a product security engineer based in Brisbane, Australia. He currently leads product security efforts for IIX, a software-defined interconnection company. David has been involved in the security industry for the last 15 years. During this time he has found high-impact and novel flaws in dozens of major Java components. He has worked for Red Hat's security team, led a Chinese startup that failed miserably, and wrote the core aviation meteorology system for the southern hemisphere. In his spare time he tries to stop his two Dachshunds from taking over the house.
                

What projects in OpenDaylight are you working on? Any new developments to share?

I'm currently primarily working on security efforts across all OpenDaylight projects. We've now got a strong security response team up and running and the next step is to implement a proactive secure engineering program. This program will aim to reduce the number of security issues in OpenDaylight releases and to aid end users with documentation around security configuration and best practices. If any students are interested in contributing to this effort, I'm proposing an OpenDaylight summer internship project: https://wiki.opendaylight.org/view/InternProjects:Main#Implement_a_secure_engineering_process_for_OpenDaylight.

What do you think is most important for the community to focus on for the next platform release called Lithium?

OpenDaylight is starting to stabilize with powerful new features added all the time. Currently, the documentation effort has not quite kept up with the pace of development. I think it is important for the project to focus on documenting the functionality that already exists and providing clear guides for deploying OpenDaylight across a variety of use cases.

What is the Proof of Concept (PoC) or use case that you hear most about for OpenDaylight?

Managing OpenFlow switches using the OpenDaylight controller seems to be the most common use case. The OpenFlow plugin is advanced and well-documented and I think that this is the use case that we'll primarily see as OpenDaylight is deployed into production in 2015.

Where do you see OpenDaylight in five years?

Over the next couple of years I see OpenDaylight being deployed into production to manage increasingly complex networks of OpenFlow switches. The next step will be connecting these networks to each other and to legacy (non-SDN) IP networks. This will involve the OpenDaylight controller managing layer 3 routing devices. The BGP/LS and PCEP project provides a great starting point for OpenDaylight to manage layer 3 networks and I see this expanding much further.

How would you describe OpenDaylight to a developer interested in joining the community?

I joined the OpenDaylight community by bootstrapping the security response team. Some open source projects can view reports of security issues as an affront or they can ignore them entirely. When I highlighted the pressing need for a security response team, I found the OpenDaylight community to be very supportive. Several existing OpenDaylight developers immediately helped me to get the security response team up and running and to adopt a documented process. I felt welcomed and appreciated. I've participated in several large open source communities and often there is some tension between developers who are employed by rival vendors. My experience in the OpenDaylight community has been free from vendor politics and I think this is a great feature of the community that we should strive to maintain.

What do you hear most from users as a key reason they want SDN?

Proprietary, hardware-based, equipment still powers most networks but the advantages of software-defined networking are coming to the fore. Many people are looking for an alternative that is cheaper, software-based, and gives them the freedom that comes with open source. In the late 1990s to early 2000s, there was a widespread trend to replace proprietary UNIX systems with Linux running on commodity hardware. I see that trend rapidly extending to networking equipment and many people are just waiting for SDN to stabilize and mature before adopting it.

What's your favorite tech conference or event?

Kiwicon, a computer security conference in New Zealand. They combine deep technical content with a fun environment. Last year they brewed their own beer and in past years they've organized for a presenter to arrive on stage on a motorbike. They even let a stuffed toy walrus give a presentation (that's a long story!).


----

Shared via my feedly reader


Sent from my iPad

Quick and efficient Ceph DevStacking [feedly]



----
Quick and efficient Ceph DevStacking
// Ceph

Recently I built a little repository on github/ceph where I put two files to help you build your DevStack with Ceph.

$ git clone https://git.openstack.org/openstack-dev/devstack
$ git clone https://github.com/ceph/ceph-devstack.git
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh


Happy DevStacking!


----

Shared via my feedly reader


Sent from my iPad

Disney's DevOps Journey: A DevOps Enterprise Summit Reprise [feedly]



----
Disney's DevOps Journey: A DevOps Enterprise Summit Reprise
// Puppet Labs

Jason Cox was a huge hit at DevOps Enterprise Summit 2014. Video hasn't been published, so we reprise Jason's story of DevOps at Disney in this blog post.


----

Shared via my feedly reader


Sent from my iPad

Fundamentals of NoSQL Security [feedly]



----
Fundamentals of NoSQL Security
// Basho

February 23, 2015

Over the last week, for a variety of reasons, the topic of security in the NoSQL space has become a prominent news item. Chief among these reasons was the announcement of a popular NoSQL database having multiple instances exposed to the public internet. From the headlines you might think that NoSQL solutions have inherent security problems. In fact, in some cases, the discussion is positioned intentionally as a relational vs. NoSQL issue. The reality is that NoSQL is not more or less secure than a traditional RDBMS.

The security of any component of the technology stack is the responsibility both of the vendor providing the technology and of those deploying it. How many routers are running with the default administrative password still set? Similarly, exposing any database, regardless of type, to the public internet without taking appropriate security precautions, including user authentication and authorization, is a "bad idea." A base level of network security is an absolute requirement when deploying any data persistence utility. For Riak this can include:

  • Appropriate physical security (including policies about root access)
  • Securing the epmd listener port, the handoff_port listener port, and the range of ports specified in riak.conf
  • Defining users and optionally, groups (using Riak Security in Riak 2.0)
  • Defining an authentication source for each user
  • Granting necessary permissions to each user (and/or group)
  • Checking Erlang MapReduce code for invocations of Riak modules other than riak_kv_mapreduce
  • Ensuring your client software passes authentication information with each request and supports HTTPS or encrypted Protocol Buffers traffic

If you enable Riak security without having an established, functioning SSL connection, all requests to Riak will fail, because Riak security (when enabled) requires a secure SSL connection. You will need to generate SSL certificates, enable SSL, and establish a certificate configuration on each node.
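As a brief illustration (file paths are hypothetical; consult the Riak documentation for your platform's defaults), the riak.conf certificate settings and the command that turns security on might look like:

# riak.conf - certificate configuration on each node (paths are illustrative)
#   ssl.certfile   = /etc/riak/cert.pem
#   ssl.keyfile    = /etc/riak/key.pem
#   ssl.cacertfile = /etc/riak/cacertfile.pem

# Then, once every node is configured, enable and verify security:
riak-admin security enable
riak-admin security status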

The security discussion does not, however, end at the network. In fact, for those who are familiar with the Open Systems Interconnection (OSI) model, the seven-layer conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers (ISO 7498-1), there is a corresponding security architecture reference (ISO 7498-2), and that is just for the network. It is necessary to adopt a comprehensive approach to security at every layer of the application stack, including the database.

The process of securing a database, which is only a component of the application stack, requires striking a fine balance. Basho has worked with large enterprise customers to ensure that Riak's security architecture meets the needs of their application deployments and balances the effort required with the security, or compliance, requirements demanded by some of the world's largest deployments.

NoSQL vs. Relational Security

As enterprises continue to adopt NoSQL more broadly, the question of security will continue to be raised. The reality is simple: it is necessary to evaluate the security of the database you are exploring in the same way that you would evaluate its scalability or availability characteristics. There is nothing inherent to the NoSQL market that makes it less, or more, secure than relational databases. It is true that some relational databases, by virtue of their age and maturity, have more expansive security tooling available. However, when adopting a holistic, risk-based approach to security, NoSQL solutions like Riak are as secure as required.

Security and Compliance

A compliance checklist (be it HIPAA or PCI) details, in varying specificity, the security requirements needed to achieve compliance. This checklist is subsequently verified through an audit by an independent entity, as well as through ongoing internal audits.

So can I use NoSQL in compliant environments?

Without question, yes. The difficulty of achieving compliance will depend on how the database is configured, what controls it provides for authentication and authorization, and many other elements of your application stack (including physical security of the datacenter, etc.). Basho customers have deployed Riak in highly regulated environments and achieved their compliance requirements.

I would encourage you, however, to realize that compliance is an event. The process of securing your application, database, datacenter, etc. is an ongoing exercise. Many, particularly those in the payments industry, refer to this as a "risk-based" approach to security vs. a "compliance-based" approach.

Security and Riak

In nearly all commercial deployments of Riak, Riak is deployed on a trusted network and unauthorized access is restricted by firewall routing rules. This is expected and necessary, and it is sufficient for many use cases (when included as part of a holistic security posture, including locking down ports, reasonable policies regarding root access, etc.). Some applications need an additional layer of security to meet business or regulatory compliance requirements.
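As a sketch of part of that posture (the trusted subnet below is hypothetical; 8087 is Riak's default protocol buffers port, but your listener ports depend on your riak.conf):

# Allow Riak client traffic only from a trusted subnet, drop everything else
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 8087 -j ACCEPT
iptables -A INPUT -p tcp --dport 8087 -j DROP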

To that end, in Riak 2.0, the security story changed substantially. While you should, without question, apply network-layer security on top of Riak and the systems that Riak runs upon, there are now security features built into Riak that protect Riak itself, not just its network. This includes authentication (the process of identifying a user) and authorization (verifying whether the authenticated user has access to perform the requested operation). Riak's new security features were explicitly modeled after user- and role-based systems like PostgreSQL. This means that the basic architecture of Riak Security should be familiar to most.

In Riak, administrators can selectively control access to a wide variety of Riak functionality. Riak Security allows you to both authorize users to perform specific tasks (from standard read/write/delete operations to search queries to managing bucket types and more) and to authenticate users and clients using a variety of security mechanisms. In other words, Riak operators can now verify who a connecting client is and determine what that client is allowed to do (if anything). In addition, Riak Security in 2.0 provides four options for security sources:

  • trust — Any user accessing Riak from a specified IP may perform the permitted operations
  • password — Authenticate with username and password (works essentially like basic auth)
  • pam — Authenticate using a pluggable authentication module (PAM)
  • certificate — Authenticate using client-side certificates
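For example, a minimal sketch of wiring these pieces together (the user name, password, network, and permissions are all illustrative, not prescriptive):

# Create a user with a password credential
riak-admin security add-user alice password=example-secret
# Let alice authenticate by password from localhost
riak-admin security add-source alice 127.0.0.1/32 password
# Grant basic key/value permissions on any bucket
riak-admin security grant riak_kv.get,riak_kv.put on any to alice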

More detail on the Riak 2.0 security capabilities is presented in the Security section of the documentation, in particular the section entitled Authentication and Authorization.

With a NoSQL system that provides authentication and authorization, and a properly secured network, you have progressed a long way in reducing the risk profile of your system. The application layer, of course, must still be considered.

Learn More

Relational databases are still a part of the technology stack for many companies; others are innovating and incorporating NoSQL solutions either as a replacement for, or alongside, existing relational databases. As a result, they have simplified their deployments, enhanced their availability, and reduced their costs.

Join us for this webinar, where we will look at the differences between relational databases and NoSQL databases like Riak, examine why companies choose Riak over a relational database, analyze the decision points you should consider when choosing between relational and NoSQL databases, and review specific use cases, data modeling, and query options.

This Webinar is being held in two time slots:

Tyler Hannan


----

Shared via my feedly reader


Sent from my iPad

The New Nutanix Plugin for XenDesktop is Citrix Ready [feedly]



----
The New Nutanix Plugin for XenDesktop is Citrix Ready
// Citrix Blogs

By Guest Blogger Andre Leibovici. Nutanix and Citrix have collaborated to create a new, innovative way to assign Service Level Agreements (SLA) for virtual desktops. The Nutanix Plugin for XenDesktop enables Citrix administrators to answer questions like "how can I ensure my CxO desktops are fully protected, or guarantee that the Development team desktops are getting full performance?" Here are Some More…

Read More


----

Shared via my feedly reader


Sent from my iPad

How to Prevent a DOS Via User Lockouts at NetScaler Gateway [feedly]



----
How to Prevent a DOS Via User Lockouts at NetScaler Gateway
// Citrix Blogs

Before we begin let me first say… "All NetScaler Gateway landing page customizations are unsupported. Changing the NetScaler Gateway landing page will cause you to have an unsupported environment. I do not condone malicious attempts to lockout user accounts. The purpose of this article is to highlight a current risk and mitigation steps." Now that that is out of the way, let's start with the…

Read More


----

Shared via my feedly reader


Sent from my iPad

My Top Picks for Citrix Synergy Sessions and Labs [feedly]



----
My Top Picks for Citrix Synergy Sessions and Labs
// Citrix Blogs

  One of the most important qualities of Citrix Synergy 2015 in Orlando, May 12–14, is the balance it strikes between strategy and tactics, providing both "big picture" business and technical vision along with practical training. Synergy achieves this balance with breakout sessions and labs that welcome a wide variety of presenters—including customers, analysts, and IT vendors—alongside Citrix experts. It's a great mix that sparks…

Read More


----

Shared via my feedly reader


Sent from my iPad

Connector/Python 2.1 [feedly]



----
Connector/Python 2.1
// MySQL - New Product Releases

Connector/Python 2.1 (2.1.1 alpha, published on Monday, 23 Feb 2015)
----

Shared via my feedly reader


Sent from my iPad

Bedphones Sleep Headphones for $39 [feedly]



----
Bedphones Sleep Headphones for $39
// StackSocial

Headphones So Thin & Comfortable, You'll Forget You Have Them On
Expires March 26, 2015 23:59 PST
Buy now and get 33% off




KEY FEATURES

Fall asleep comfortably with Bedphones, the ultra-thin headphones that lie flat on your ears so you can doze off to soothing tunes, podcasts, white noise, and more. The Bedphones' ear-hooks are made from soft, moldable wire to provide maximum comfort for maximum rejuvenation. "Sleeping with a pair of headphones on is, at best, uncomfortable. That is, unless you happen to own a pair of Bedphones," - Engadget
  • Play, pause & skip music tracks w/ the single-button remote
  • Download the sleep app to automatically shut off sound when you fall asleep
  • Adjust the ear-hooks to secure tightly for the night
  • Use w/ included travel case & satin eye mask for optimized sleep
  • Make calls w/ the built-in microphone

COMPATIBILITY

  • Any device w/ standard 3.5mm headphone input

PRODUCT SPECS

  • Less than 1/4" thickness
  • Driver diameter: 23mm
  • Sensitivity: 116 dB/V ±5dB @ 1 kHz
  • Impedance: 32 Ω ±15%
  • Plug: standard 3.5mm stereo, gold-plated
  • Cable length: 59"/150cm
  • Frequency range: 20 Hz - 20 kHz
Includes:
  • Bedphones (white)
  • Carrying case
  • Satin eye-mask
  • Replacement speaker foams

SHIPPING DETAILS

  • Free shipping
  • Ships to: Continental US
  • Shipping lead time: 1-2 weeks

----

Shared via my feedly reader


Sent from my iPad

MakersKit 'Classic Cocktails' Bar Set for $29 [feedly]



----
MakersKit 'Classic Cocktails' Bar Set for $29
// StackSocial

Essential Tools & Recipes for the At-Home Bartender
Expires April 18, 2015 23:59 PST
Buy now and get 34% off



KEY FEATURES

Shake, stir, and muddle your way to delicious homemade cocktails with this must-have bar set. Expect only the finest quality tools from MakersKit, but remember, they leave the quality cocktails to you. A Top 12 Favorite Things of 2014 pick from Sunset Magazine.
  • Quart-size vintage-style Mason jar shaker
  • Retro double jigger for accurate measurements
  • Strainer & spouts for a mixologist-style smooth pour
  • Hardwood muddler to grind mint, berries & more
  • 24 delectable recipes, from classic to contemporary

PRODUCT SPECS

Includes:
  • Mason jar shaker, quart-size
  • Stainless steel strainer
  • Double jigger, 1 & 1.5 oz
  • Hardwood muddler
  • Ice tongs
  • Cocktail stir spoon
  • 2 pour spouts
  • Digital how-to recipe guide included

SHIPPING DETAILS

  • Free shipping
  • Ships to: Continental US
  • Shipping lead time: 1-2 weeks

----

Shared via my feedly reader


Sent from my iPad

IBM InterConnect and Chef [feedly]



----
IBM InterConnect and Chef
// Chef Blog

This week our friends at IBM are hosting their InterConnect 2015 conference, and we're pleased to announce expanding (and existing) support for a wide variety of their products. IBM is synonymous with the Enterprise, and they have embraced Chef in a big way. Using Chef across your IBM infrastructure improves efficiency and reduces risk, because you can pick the right environment for your applications. Whether it's AIX, POWER Linux, or an OpenStack or SoftLayer cloud, Chef has you covered by providing one tool to manage them all.

In Chef 12 we officially added AIX support, and there has been tremendous interest because many large enterprise customers have a significant investment in the platform. By providing full support for AIX resources such as SRC services, BFF and RPM packages, and other platform-specific features, AIX systems become part of the larger computing fabric managed by Chef. The AIX cookbook expands this functionality, and there is even a knife-lpar plugin for managing POWER architecture logical partitions.
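A minimal, hedged sketch of what this looks like in a recipe (the fileset name, file path, and service name below are hypothetical):

# Install a BFF fileset from a local file on an AIX node
bff_package 'bos.net.tcp.client' do
  source '/tmp/filesets/bos.net.tcp.client.bff'
  action :install
end

# Start an SRC-managed service through the standard service resource
service 'xntpd' do
  action :start
end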

In addition to supporting AIX on POWER, we're also currently working on providing official Chef support for Linux on POWER for Ubuntu LE and Red Hat Enterprise Linux 7 BE and LE. We plan to release initial Chef client support for all 3 platforms by ChefConf. Once the clients are available the Chef server will be ported to these platforms and we expect to release it early this summer.

Chef is core to IBM's OpenStack offerings and IBM is very active in the Chef OpenStack community. Chef is used to both deploy and consume OpenStack resources through knife-openstack, kitchen-openstack, Chef Provisioning, and OpenStack cookbooks. Support for Heat is under active development and new features are being released and supported all of the time.
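As a rough illustration of the knife-openstack workflow (the flavor, image, and names below are hypothetical, and exact flag names can vary across plugin versions):

# Launch and bootstrap an OpenStack instance as a Chef node
knife openstack server create \
  --flavor m1.small \
  --image ubuntu-14.04 \
  --node-name web1 \
  --run-list 'recipe[apache2]'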

IBM's SoftLayer Cloud also has great Chef support. The knife-softlayer plugin allows you to easily launch, configure and manage compute instances in the IBM SoftLayer Cloud. There is a Chef Provisioning plugin for SoftLayer under development and they even have a Ruby API for further integrations.

With the Chef client on AIX, the client and server on Linux on POWER, and nodes being managed on OpenStack and SoftLayer clouds, administrators with IBM systems have many options when it comes to managing their infrastructure with Chef. We've enjoyed working with them and expect to continue making substantial investments in integrating IBM's platforms to meet Chef customers' automation needs across diverse infrastructures.


----

Shared via my feedly reader


Sent from my iPad

Pauly Comtois: My DevOps Story (Pt. 3) [feedly]



----
Pauly Comtois: My DevOps Story (Pt. 3)
// Chef Blog

This post concludes our bi-weekly blog series on Awesome Chef Pauly Comtois' DevOps story. You can read the final part below, while part one is here and part two is here. Thank you to Pauly for sharing his tale with us!

Leveling Up the Sys Admins

The last hurdle was that, even with all we'd accomplished, we still weren't reaching the sys admins. I had thought they would be my vanguard, we would charge forward, and we were going to show all this value. Initially, it turned out they didn't want to touch Chef at all! Jumpstart, Kickstart and shell scripts were still the preferred method of managing infrastructure.

About the same time that the release team was getting up to speed, the database team decided that they wanted a way to get around the sys admin team because it took too long for changes to happen. One guy on the database team knew a guy on the apps team who had root access and that guy began to make the changes for the database team with Chef. The sys admins were cut out and the apps team felt resentful because the sys admins weren't doing their job.

That started putting pressure on the sys admins. The app team was saying, "Hey, you guys in sys admin, you can't use that shell script any more to make the change for DNS. Don't use Kickstart and Jumpstart because they only do it once, and we don't have access. We need to be able to manage everything going forward across ALL pods, not one at a time and we need to do it together." It was truly great to see the app team take the lead and strive to help, rather than argue.

So, the app team started to train the sys admins. They'd say, "This is how you make the change. It's super easy. Bring this file up in Vim or Emacs, make the change, save it up to the repository, and it'll push it out on its own in the next Chef client run." The sys admins were amazed. "Really, that's all I have to do?" "Yep, that's all you have to do."

I worked with people to identify the biggest pain points that the systems guys dealt with. We were looking for things they had to manage across all six pods every day, LDAP, DNS, memory tuning and things like that. We showed them how to fix those things with Chef, and after a while they forgot that Chef was even managing it. It became more and more common for them to pop in and make a change, following a lean change management process. They started using the shell scripts less and less. Finally, I said let's get rid of the shell scripts and let Chef replace the bastion host.

That was a very long process. It took months, and honestly none of it was really about tools. It was all cultural. It was all about managing people and their emotions and fears about losing their identities and their responsibilities. The inertia of staying with what they knew was countered not by me, but by the other teams and a shifting cultural approach.

Admittedly, a couple people didn't make it. They couldn't see any value in unifying configurations and streamlining our processes. Some called configuration automation and CI a fad. The result was that they were no longer happy with the new direction. While I don't ever like losing good people, sometimes they must do what is right for themselves and seek a place where they can be happy.

As a leader you need to try to mentor individuals and get them to see the value of what you're doing. I see three stages in these types of cultural changes. At the beginning, you try to get people on board, you try and get them excited, so they can contribute and feel good about it. You engage and empower them and get out of their way. It is critical to communicate openly and be transparent on why the change is needed. Sometimes you will have someone that just doesn't want to change. In this case I believe in "You don't have to agree, but I need you to support the idea." Often they will see the value and benefit over time. If they don't support the idea, then the message is, "Look, this is just not going to work out." We were not able to provide them a place to hide, the organization was too small and everyone had to contribute.

The Big Win

Over time the company grew significantly, and it was bought by an even larger firm. Chef helped make that happen. DevOps helped make that happen. Before Chef, all that rolled-back code caused customer attrition and burned-out employees. The platform would always go down, and customers would have to wait six months for any new feature because it was so painful for us to get product out to the market.

Chef enabled us to have both frequent, reliable deployments and a stable production environment. This led directly to no more code rollbacks and happier, more satisfied customers.

 


----

Shared via my feedly reader


Sent from my iPad

Monday, February 23, 2015

Apple releases second beta of OS X 10.10.3 with focus on new Photos app [feedly]



----
Apple releases second beta of OS X 10.10.3 with focus on new Photos app
// AppleInsider | Apple news and rumors since 1997

Apple on Monday seeded developers with another beta version of its forthcoming OS X 10.10.3 update, asking testers to focus primarily on the new Photos app with additional attention paid to Wi-Fi captive network support and screen sharing.
----

Shared via my feedly reader


Sent from my iPad

Apple to Invest €1.7 Billion in New European Data Centres [feedly]



----
Apple to Invest €1.7 Billion in New European Data Centres
// Apple Inc. Press Releases

CORK, Ireland—February 23, 2015—Apple® today announced a €1.7 billion plan to build and operate two data centres in Europe, each powered by 100 percent renewable energy. The facilities, located in County Galway, Ireland, and Denmark's central Jutland, will power Apple's online services including the iTunes Store®, App Store℠, iMessage®, Maps and Siri® for customers across Europe.

"We are grateful for Apple's continued success in Europe and proud that our investment supports communities across the continent," said Tim Cook, Apple's CEO. "This significant new investment represents Apple's biggest project in Europe to date. We're thrilled to be expanding our operations, creating hundreds of local jobs and introducing some of our most advanced green building designs yet."

Apple supports nearly 672,000 European jobs, including 530,000 jobs directly related to the development of iOS apps. Since the App Store's debut in 2008, developers across Europe have earned more than €6.6 billion through the worldwide sale of apps.

Apple now directly employs 18,300 people across 19 European countries and has added over 2,000 jobs in the last 12 months alone. Last year, Apple spent more than €7.8 billion with European companies and suppliers helping build Apple products and support operations around the world.

Like all Apple data centres, the new facilities will run entirely on clean, renewable energy sources from day one. Apple will also work with local partners to develop additional renewable energy projects from wind or other sources to provide power in the future. These facilities will have the lowest environmental impact yet for an Apple data centre.

"We believe that innovation is about leaving the world better than we found it, and that the time for tackling climate change is now," said Lisa Jackson, Apple's vice president of Environmental Initiatives. "We're excited to spur green industry growth in Ireland and Denmark and develop energy systems that take advantage of their strong wind resources. Our commitment to environmental responsibility is good for the planet, good for our business and good for the European economy."

The two data centres, each measuring 166,000 square metres, are expected to begin operations in 2017 and include designs with additional benefits for their communities. For the project in Athenry, Ireland, Apple will recover land previously used for growing and harvesting non-native trees and restore native trees to Derrydonnell Forest. The project will also provide an outdoor education space for local schools, as well as a walking trail for the community.

In Viborg, Denmark, Apple will eliminate the need for additional generators by locating the data centre adjacent to one of Denmark's largest electrical substations. The facility is also designed to capture excess heat from equipment inside the facility and conduct it into the district heating system to help warm homes in the neighboring community.

Apple designs Macs, the best personal computers in the world, along with OS X, iLife, iWork and professional software. Apple leads the digital music revolution with its iPods and iTunes online store. Apple has reinvented the mobile phone with its revolutionary iPhone and App Store, and is defining the future of mobile media and computing devices with iPad.

Press Contacts:
Josh Rosenstock
Apple
jrosenstock@apple.com
+44 203 284 6045

Kristin Huguet
Apple
khuguet@apple.com
+1 (408) 974-2414
 

----

Shared via my feedly reader


Sent from my iPad

Here are the Results from the Citrix Education Virtualization Pop Quiz [feedly]



----
Here are the Results from the Citrix Education Virtualization Pop Quiz
// Citrix Blogs

Beginning in late January, Citrix Education ran a four-week Virtualization Pop Quiz: ten questions designed to see just how much you know about XenDesktop and XenApp. Over the course of four weeks, more than 500 people headed over to the Citrix Education Facebook page to test their knowledge. Here's what we found out: turns out this quiz was tough! Out of all the entries, there…

Read More


----

Shared via my feedly reader


Sent from my iPhone

Enabling Next-Generation Virtual Delivery Infrastructures for the Modern Data Center [feedly]



----
Enabling Next-Generation Virtual Delivery Infrastructures for the Modern Data Center
// Citrix Blogs

Part 1 of a four-part series written by Andy Melmed, Senior Sales Engineer, offering the Citrix community valuable insight into what the recent acquisition of Sanbolic, a long-time Citrix-Ready partner and pioneer of software-defined storage, means for Citrix, its customers, and the entire IT industry as it continues to embrace the benefits of software-defined storage and the new era of storage commoditization. With…

Read More


----

Shared via my feedly reader


Sent from my iPhone

Power Vault 18,000mAh Portable Battery Pack for $30 [feedly]



----
Power Vault 18,000mAh Portable Battery Pack for $30
// StackSocial

Charge 2 USB-Compatible Devices at Once. Don't Let Battery Life Slow Life Down.
Expires May 24, 2015 23:59 PST
Buy now and get 72% off





KEY FEATURES

Get the max bang for your buck with this robust power bank packed with 18,000mAh of battery juice. Charge your smartphone, tablet, or both at the same time, so you can stay charged on the road.
  • Lightweight & compact
  • Two USB charging stations
  • Rechargeable 18,000mAh battery
  • Sleek aluminum case

COMPATIBILITY

  • Charges USB-compatible devices

PRODUCT SPECS

  • 18,000mAh battery (approximate power)
Includes:
  • 1 Power Vault (blue)

SHIPPING DETAILS

  • Free shipping
  • Shipping lead time: 2-3 weeks
  • Ships to: Continental US & Hawaii only

----

Shared via my feedly reader


Sent from my iPhone

UsingChef Issue 36 [feedly]



----
UsingChef Issue 36
// UsingChef Newsletter Archive Feed

Using Chef Newsletter

Using Chef Issue 36


While compiling this newsletter it occurred to me that the Chef community is quite a diverse group of people: Windows/*nix admins, DBAs, Ruby coders and more. That's what makes it fun to be a part of! On to the news -

Articles

Help test the future of Windows Infrastructure Testing on Test-Kitchen
Test Kitchen on Windows has made huge improvements over the past few months. Go give the preview a try and report bugs.
via Matt Wrock @mwrockx

Hacking AWS OpsWorks
A great overview of OpsWorks from a fairly large operation. They also built a gem, opsicle, to help manage OpsWorks.
via Andy Fleener @andyfleener

Chef 12.1.0 chef_gem resource warnings
If you use the chef_gem resource, there are some changes coming in Chef 12.1.0.
via Chef Office Hours @ChefOfficeHours
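
For context on that item: as I understand the 12.1.0 change, chef_gem gains a compile_time property and warns when it is left unset, ahead of the default moving from compile-time to converge-time installation. A minimal sketch, with an example gem name:

    # Chef 12.1.0: state explicitly when the gem should be installed
    # to avoid the deprecation warning. The gem name is illustrative,
    # and you would pick one of the two forms, not both.

    # Preserve the old implicit behavior (install during compile):
    chef_gem 'right_aws' do
      compile_time true
    end

    # Or install at converge time, like an ordinary resource:
    chef_gem 'right_aws' do
      compile_time false
    end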

Integrating Chef with OpenStack instances
A good example of using cloud-init to bootstrap chef on OpenStack.
via Dan @

Learning configuration management as a DBA
A tale from a DBA who went from how-to docs to managing a DB cluster with Chef.
via DB Smasher @dbsmasher

Simplify OpsWorks Dev with Packer
In a follow-up to his recently updated Virtualizing AWS OpsWorks with Vagrant, Michael Greiling takes a look at how to make developing in OpsWorks easier with Packer.
via Michael Greiling @mikegreiling

A new kind of Resource: the resource cookbook
Chef Resources are incredibly important for creating good, useful, reusable cookbooks. Yet people often don't create them because it's harder than it should be. We need to change that.
via John Keiser @jkeiser
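
For anyone who hasn't written one, "harder than it should be" refers to the current split between a resource definition and a provider. A minimal, entirely hypothetical myapp_site resource looks roughly like this in the LWRP style:

    # resources/site.rb in a hypothetical 'myapp' cookbook
    actions :create
    default_action :create

    attribute :name, kind_of: String, name_attribute: true
    attribute :port, kind_of: Integer, default: 8080

    # providers/site.rb
    use_inline_resources   # converge inner resources so notifications fire

    action :create do
      template "/etc/myapp/sites/#{new_resource.name}.conf" do
        source 'site.conf.erb'
        variables port: new_resource.port
      end
    end

    # Used in a recipe as:
    #   myapp_site 'blog' do
    #     port 8081
    #   end

Two files and two DSLs for one reusable resource, which is exactly the friction the article argues we should remove.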

Zap Cookbook
Zap is a very interesting cookbook that removes resources (users, directories, etc.) that are not managed by Chef.
via Joe Nuspl @JoeNuspl
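
A taste of what that looks like, going from memory of the cookbook's README (treat the resource name as an assumption and check the README before relying on it):

    # Requires the zap cookbook as a dependency in metadata.rb.
    # zap_directory removes any file under the path that no resource
    # in the current Chef run manages, purging stale drop-in configs.
    zap_directory '/etc/logrotate.d'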

Recently Released

Tweet of the week

@BonzoESC "wtf this movie isn't about configuration management at all http://t.co/xHNRxvUtQ9"

Copyright © 2015 UsingChef, All rights reserved.




----

Shared via my feedly reader


Sent from my iPhone

Chef and Microsoft to Bring Further Automation and Management Practices to the Enterprise [feedly]



----
Chef and Microsoft to Bring Further Automation and Management Practices to the Enterprise
// Chef Blog

New Agreement Empowers Enterprises to Automate Workloads Across On-Premises Data Centers and Microsoft Azure to Become Fast, Efficient, and Innovative Software-Driven Organizations

SEATTLE – Feb 23, 2015 – Today it was announced that Chef and Microsoft Azure have joined forces to provide global enterprises with the automation platform and DevOps practices that increase business velocity to meet customer demand in the digital age. This agreement builds on 12 months of engineering work to integrate Chef's IT automation platform with the Microsoft stack, helping customers rapidly move Windows and Linux workloads to Azure.

According to research firm IDC, DevOps will be adopted (in either practice or discipline) by 80 percent of Global 1000 organizations by 2019 (IDC MaturityScape Benchmark: DevOps in the United States, Dec. 2014).

Working together, Chef and Microsoft will provide enterprises with the tools, skills, and guidance to make IT innovation more rapid and frequent within Azure. By automating both compute resources and applications, Chef enables developers and operations to best collaborate on rapidly delivering high-quality software and services.

"IT is shifting from being an infrastructure provider to becoming the innovation engine for the new software economy. Key elements of the new, high-velocity IT include automation, cloud, and DevOps," said Barry Crist, CEO, Chef. "Our partnership with Microsoft is about bringing these elements to enterprises in all industries and geographies. This is a big investment for both Chef and Microsoft, bringing to bear the expertise and resources to transform IT into an innovation engine using Microsoft technology."

"Microsoft is excited to extend our work with Chef to help customers rapidly move their workloads into the Azure cloud," said Jason Zander, Corporate Vice President, Microsoft Azure. "Through this collaboration, we are not only enabling faster time to innovation in the cloud, but we are also underscoring Microsoft's commitment to providing a first-class cloud experience for our customers regardless of whether they are using Windows or Linux."

Key components of the collaboration include:

  • Engineering Collaboration: Chef and Microsoft will further enhance native automation experiences for Azure, Visual Studio, and Windows PowerShell DSC users. Microsoft Open Technologies has its own collection of Chef Cookbooks, providing rock-solid code for automating the provisioning and management of compute and storage instances in Azure. 2015 will bring additional deliverables across Windows, Azure, and Visual Studio with a focus on empowering customers to automate heterogeneous workloads and easily migrate them to Azure.
  • Sales Training and Customer Support: Chef will deliver hundreds of hours of DevOps education across Microsoft's expansive ecosystem of industry events, digital channels, and community meetups. Chef will work with Microsoft to enable their field sales organization to support customers embracing automation, DevOps practices, and Microsoft Azure. Microsoft users interested in Chef and DevOps can already access a wealth of content, including a new online training tutorial on the Windows platform and a webinar series on automating Azure with Chef.
New Webinar: Automating the Microsoft Stack with Chef

On March 19th, Microsoft's Kundana Palagiri and Chef's Steven Murawski and Michael Ducy will showcase the technical integrations between Chef and Azure. This webinar will demonstrate real-world use cases, providing attendees with a step-by-step guide to achieving the benefits of Chef within Microsoft environments. Register today.

Additional Resources


----

Shared via my feedly reader


Sent from my iPhone