Wednesday, March 29, 2017

----
Two in three enterprises at least planning on moving to DevOps, survey finds
// cloudcomputing-news.net: Latest from the homepage

A new survey from machine data analytics provider Sumo Logic has found more than two thirds of enterprises either plan to adopt DevOps or are already doing so, while four in five are currently or plan to use at least one public cloud service.

The study, titled 'The New Normal: Cloud and DevOps Analytics Tools Reign in the Modern App Era' and put together alongside UBM Technology, found that of the 230 IT operations, app development and information security professionals polled, 67% are utilising software as a service, while 42% are deploying and updating apps more frequently than in the past.

When it came to the types of public cloud service being used, Microsoft Azure (66%) and Amazon Web Services (55%) were the most popular. In another finding that was less than surprising, security was the biggest challenge to cloud adoption, cited by 27% of respondents. Only 6% of those polled rated the security of public cloud as 'excellent', although 55% said public cloud services were more secure than they had been previously.

"As cloud computing becomes standard in IT organisations, concerns about security and loss of control persist," said Amy Doherty research director for UBM Technology in a statement. "This new study confirms these findings and delves into best practices for how cloud-first companies are leveraging new tools and technologies to speed adoption."

This publication has extensively covered previous DevOps research, with a lack of consensus being the common thread among recent studies. Automation software provider Quali, drawing on data from various industry events in 2016, found earlier this month that the current vendor ecosystem was open-ended, while almost half of applications in traditional environments were considered 'complex' for cloud.


----
containerd joins the Cloud Native Computing Foundation
// Docker Blog

Today, we're excited to announce that containerd – Docker's core container runtime – has been accepted by the Technical Oversight Committee (TOC) as an incubating project in the Cloud Native Computing Foundation (CNCF). containerd's acceptance into the CNCF alongside projects such as Kubernetes, gRPC and Prometheus comes three months after Docker, with support from the five largest cloud providers, announced its intent to contribute the project to a neutral foundation in the first quarter of this year.

In the process of spinning containerd out of Docker and contributing it to CNCF, a few changes come along with it. For starters, containerd now has a logo; see below. In addition, we have a new @containerd Twitter handle. In the next few days, we'll be moving the containerd GitHub repository to a separate GitHub organization. Similarly, the containerd Slack channel will be moved to a separate Slack team, which will soon be available at containerd.slack.com.

containerd logo

containerd has been extracted from Docker's container platform and includes methods for transferring container images, container execution and supervision, and low-level local storage, across both Linux and Windows. containerd is an essential upstream component of the Docker platform used by millions of end users, and it also provides the industry with an open, stable and extensible base for building non-Docker products and container solutions.

"Our decision to contribute containerd to the CNCF closely follows months of collaboration and input from thought leaders in the Docker community," said Solomon Hykes, founder, CTO and Chief Product Officer at Docker. "Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embedded into higher level systems to provide core container capabilities. Our focus has always been on solving users' problems. By donating containerd to an open foundation, we can accelerate the rate of innovation through cross-project collaboration – making the end user the ultimate benefactor of our joint efforts."

The donation of containerd aligns with Docker's history of making key open source plumbing projects available to the community. This effort began in 2014 when the company open sourced libcontainer. Over the past two years, Docker has continued along this path by making libnetwork, notary, runC (contributed to the Open Container Initiative, which like CNCF, is part of The Linux Foundation), HyperKit, VPNKit, DataKit, SwarmKit and InfraKit available as open source projects as well.

containerd is already a key foundation for Kubernetes, as Kubernetes 1.5 runs with Docker 1.10.3 to 1.12.3. There is also strong alignment with other CNCF projects: containerd exposes an API using gRPC and publishes metrics in the Prometheus format. containerd also fully leverages the Open Container Initiative's (OCI) runtime, image format specifications and OCI reference implementation (runC), and will pursue OCI certification when it is available. A proof of concept for integrating containerd directly into the Kubernetes CRI is currently being worked on. Check out the pull request on GitHub for more technical details.


Figure 1: containerd's role in the Container Ecosystem

Community consensus leads to technical progress

In the past few months, the containerd team has been busy implementing Phase 1 and Phase 2 of the containerd roadmap. Progress can be tracked in the containerd weekly development reports posted in the GitHub project.

At the end of February, Docker hosted the containerd Summit with more than 50 members of the community from companies including Alibaba, AWS, Google, IBM, Microsoft, Red Hat and VMware. The group gathered to learn more about containerd, get more information on containerd's progress and discuss its design. To view the presentations, check out the containerd summit recap blog post.

The target date for completing the containerd 1.0 roadmap is June 2017. To contribute to containerd, or to embed it into a container system, check out the project on GitHub. If you want to learn more about containerd's progress, or discuss its design, join the team in Berlin tomorrow at KubeCon 2017 for the containerd Salon, or in Austin for DockerCon Day 4 on Thursday, April 20th, when the Docker Internals Summit morning session will be a containerd summit.



The post containerd joins the Cloud Native Computing Foundation appeared first on Docker Blog.


----
Telia Latvia uses Apache CloudStack to accelerate move to Next Generation Telco
// CloudStack Consultancy & CloudStack...

Telia Latvia was founded in 1992 as a telecommunications company with a focus on data communications and internet services. In 2009 they established one of the most modern data centres in the Baltics and in 2013 introduced new generation cloud and CDN services. They now deliver best-in-class cloud and telecommunication services to customers in the Baltics and beyond.

Headquartered in Riga, they are part of the Telia Company group, which is the largest Nordic and Baltic fixed-voice, broadband, and mobile operator by revenue and customer base. Telia Company operates Europe's largest and fastest-growing wholesale IP backbone and is the 10th-largest global mobile group by consolidated customers.

Exploring cloud as a new set of B2B services

Telia Latvia was initially a pure telecommunications and infrastructure company focused on the Latvian enterprise market. Having received inquiries from customers interested in IaaS services and virtualization, they decided to explore cloud as a new set of B2B services.

In 2013, the company set out to build an IaaS/virtual datacentre as both a service for their customers and also as a basis for their own internal infrastructure requirements.

 

Implementing a cloud orchestration tool using Apache CloudStack

Telia Latvia made the decision to implement a cloud orchestration platform. This needed to adhere to their core philosophy – any new service launched must be available to customers via a fully featured self-service portal and any back-end process should be automated to limit human interaction, thereby resulting in the delivery of a speedy service and superior user experience.

The company initially chose a proprietary distribution of CloudStack as the orchestration tool. In early 2016 they approached ShapeBlue to help them migrate to fully open-source Apache CloudStack in order to give them the ability to access the newest features, easily customize, and interact with the CloudStack community.

 

Decreasing time-to-market for new services through more agile development

Telia Latvia now runs public workloads for customers, and their cloud is also used for their own internal workloads and services, such as cloud-based surveillance, virtual desktops, backup and storage as a service.

Martins Paurs, CCO of Telia Latvia, explained: "The creation of new services has become much faster. Now we are very flexible and quick to innovate alone or together with our customers. This new capability allows us to be not just a supplier or vendor, but a partner, whom our customers can trust and find the right solution, whatever the problem is."

Adding customer value

"CloudStack enabled us to add value to our customers by offering cloud and VDC services on top of our networks." continues Paurs. "Now we are a telecommunications AND a cloud provider, thus we have all the tools, resources and competencies to deliver the full value chain of IT and telco services to any enterprise customer".

"One of our biggest challenges during the process was to change the mind-set and introduce cloud as a business model in our daily activities. Cloud means no commitment, pay as you use, almost unlimited flexibility – basically all that a standard telco company does not deliver" says Paurs.

Opensource benefits backed by professional support services

Whilst wanting the benefits that a true open-source platform brings, Telia Latvia also needed to ensure that they had reliable, SLA-based support of the platform. On this, Paurs said "we were extremely impressed by the team at ShapeBlue. Not only do they have very deep technical knowledge of CloudStack, which they have gained through being active contributors to the project, they also have a wide range of practical experience and best practice to offer us through their relationships with many similar telco companies. We were so impressed by their assistance in our migration that they were the logical choice to provide support for our environment. We have, again, been highly impressed and look forward to partnering with ShapeBlue long into the future".

Changing organisation mindset

On the other benefits of Telia Latvia's deployment of Cloudstack, Paurs commented: "CloudStack is one of the most important success factors for Telia Latvia. It not only helped to develop new services and gain access to new revenue streams in an ever declining telco segment, but most importantly it helped change the organisation's own mindset. Introducing new values in our daily work and interactions with customers, we have become much more productive and agile. Telia Latvia is a new generation telco, this is in a large part thanks to CloudStack."

In summary, Martins concluded "CloudStack is one of the most important success factors for Telia Latvia. It not only helped to develop new services and gain access to new revenue streams in an ever declining telco segment, but most importantly it helped change the organisation's own mindset."

 

To learn more about Apache CloudStack, please visit: http://cloudstack.apache.org

To learn more about Telia Latvia, please contact:

Ildze Magazeina, Head of Marketing, Telia Latvia, Ildze.magazeina@telia.lv, +37129433366

To learn more about ShapeBlue, please visit http://shapeblue.com

 


----
Could Octoblu, Slack & PowerShell Create the Perfect XenDesktop SysAdmin?
// Citrix Blogs

2017 has been labeled as the year for IoT. I've been exploring how you can integrate Citrix's own IoT platform, Octoblu, with a fully deployed XenDesktop environment.

I'm currently interning as a Software Test Engineer in the RTST (Real Time …

  

----
MCS Storage I/O Optimization On Resource Constrained Systems
// Citrix Blogs

Machine Creation Services Storage I/O Optimization (MCS I/O) trades RAM usage in the Guest Operating System running the Virtual Delivery Agent (VDA) for reduced storage I/O load (IOPS) on central storage.

Depending on workloads, using the default configuration of 256 …

  

----
Watch: Testing at the Edges
// Chef Blog

On March 1st, I presented a live webinar titled "Testing at the Edges". Watch the recording below to hear me explain how to test resource guards that execute system commands or require specific system files. I also demonstrate enabling test coverage and working around recipes that rely on search results from the Chef Server.

In the presentation, I start by enabling test coverage and then explain how, even when it reports 100% complete, it is not exercising every logical branch in your recipes, specifically your resource guards. I explore how to structure your tests and create the two scenarios in which your resource guards execute commands. When it came time to implement a scenario that involved a resource guard with a file check in Ruby code, I work around the pitfalls that come when you start to stub out core system library methods with helper methods. I finish by showing you how to stub the results of a Chef Server search query and verify the values were correctly used within a template.
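To give a feel for the guard-testing pattern described above, here is a rough ChefSpec sketch of the two scenarios; the cookbook name, guard command and resource name are invented for illustration and are not taken from the webinar's repository:

require 'chefspec'

describe 'my_app::default' do
  # Lazy let: the converge runs inside each example, after the stubs below
  let(:chef_run) do
    ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04')
                        .converge(described_recipe)
  end

  context 'when the app is not yet configured' do
    before do
      # Make the string guard report "not configured" so the resource fires
      stub_command('test -f /etc/my_app/configured').and_return(false)
    end

    it 'runs the configuration command' do
      expect(chef_run).to run_execute('configure-my-app')
    end
  end

  context 'when the app is already configured' do
    before do
      stub_command('test -f /etc/my_app/configured').and_return(true)
    end

    it 'does not run the configuration command' do
      expect(chef_run).to_not run_execute('configure-my-app')
    end
  end
end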

During the presentation I was asked a few questions that I have included at the end of this post.


Q&A

What is the difference in declaring the execute resource and expecting 'do_nothing' rather than expect(chef_run).to_not run_execute(…)?

When defining unit tests with ChefSpec, you are defining the state of the resource collection. When you state that you expect a specific resource to_not take an action, you are asking ChefSpec to examine the list of all the resources in the resource collection and ensure that this resource with this action is not present in that list.

An incorrectly specified resource, or an incorrectly specified action on a resource that does exist in the resource collection, will give you a false positive here, because it obviously does not exist and obviously did not take that action. In this instance you are not checking for the presence of this unique resource but ensuring that it is absent. And, as you can imagine, there is a near-infinite number of resources not present within the resource collection.

This is not the same as stating that you expect a resource is in the resource collection and it is taking no action. This approach is more correct because you are making a claim about the presence of a resource, not its absence.
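Expressed as code, the two styles look roughly like this; the resource name 'reload-app' is made up, and chef_run is the converged runner from the surrounding spec:

it 'keeps the reload resource in the collection without running it' do
  # Fragile: this passes even if 'reload-app' was never declared at all,
  # because it only asserts the absence of a resource/action pair.
  expect(chef_run).to_not run_execute('reload-app')

  # Stronger: this asserts the resource is present in the resource
  # collection and that it takes no action.
  expect(chef_run.execute('reload-app')).to do_nothing
end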

How would you approach notifies and subscribes of other resources within a context scenario?

Resources may notify other resources to take action (or other resources subscribe to a resource). This chain of events is often important to evaluate because these resources reacting to other resources will take no action otherwise.

I did not demonstrate this during the webinar, but the code repository I demonstrated from has a number of examples showing off this functionality. An example of several resources that notify one another in order to accomplish the complex task of pulling down a remote file, extracting its contents (only if a new file was retrieved), setting permissions (only if it was extracted), and so on, can be found here.
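In ChefSpec terms, assertions of that kind look roughly like the following sketch; the remote_file, execute and service names are made up for illustration rather than taken from that repository:

it 'extracts the archive only when a new copy is fetched' do
  archive = chef_run.remote_file('/tmp/app.tar.gz')
  expect(archive).to notify('execute[extract-app]').to(:run).immediately
end

it 'restarts the service when its configuration changes' do
  config = chef_run.template('/etc/app/app.conf')
  expect(config).to notify('service[app]').to(:restart).delayed
end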

Can I set ServerRunner.new(platform: 'windows') if I'm running a Linux machine?

When using ChefSpec to unit test your cookbook, you can definitely evaluate different platforms. ChefSpec employs a gem named Fauxhai, which provides a large collection of sample Ohai data for multiple platforms. When I first started working with ChefSpec, I had a hard time knowing which platforms and platform versions were supported.
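So, for example, a spec running on a Linux workstation can still converge against Windows node data; the cookbook name below is hypothetical, and the platform version string needs to match one of the Fauxhai datasets shipped with your ChefSpec/Fauxhai version:

require 'chefspec'
require 'chefspec/server_runner'

describe 'my_cookbook::default' do
  # Fauxhai supplies canned Ohai data for the requested platform, so the
  # converge behaves like a Windows node even on a Linux or macOS machine.
  let(:chef_run) do
    ChefSpec::ServerRunner.new(platform: 'windows', version: '2012R2')
                          .converge(described_recipe)
  end

  it 'creates the install directory' do
    expect(chef_run).to create_directory('C:\my_app')
  end
end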

The post Watch: Testing at the Edges appeared first on Chef Blog.


----
Story Mapping with Jeff Patton at ChefConf
// Chef Blog

I recently had the pleasure of attending a 2-day class by Jeff Patton, author of User Story Mapping: Discover the Whole Story, Build the Right Product, hosted by our #ChefFriends Jeff Hackert, VP of Engineering at Soylent.

We worked in small groups to practice story mapping techniques using a game designed to simulate adding a feature to an existing product. I learned that product thinking and ownership should be a concern for the whole organization and should not be shouldered by a single individual.

At ChefConf 2017, we are thrilled to have Jeff Patton lead a workshop on Story Mapping. In his own words:

A story map is a simple way to visualize your product idea from your users' perspective. Mapping your product's story uses the same approach scriptwriters use to think through a movie or TV story idea. It's fast, collaborative, and telling your product's story helps you spot the holes in your thinking. Once created, a map lets you think through options and alternative ideas that'll make your product better. It's easy to slice out what you think is a smallest viable product, and to identify the next experiment that'll help you validate your product concept. In this workshop, you'll learn story mapping by building a simple map collaboratively with others. You'll learn how to use story maps to make sense of how users and customers do things today, and how they might do things better with your product. You'll learn how to use story maps to drive Lean Startup-style experimentation, as well as heads-down Agile software development.

I highly recommend this workshop for anyone wanting to improve their product skills. Jeff's way of teaching is so dynamic, you'll love what you're learning.

Register today! https://chefconf.chef.io/2017/

The post Story Mapping with Jeff Patton at ChefConf appeared first on Chef Blog.


----
Upgrading to Chef Client 13 – Answers to all your questions
// Chef Blog

It's great to see how much excitement there is for the upcoming release of Chef Client 13 on April 10th! We've had a bunch of questions about this release. I'll address some of the frequently asked questions here so you'll be prepared to upgrade.

How do I get Chef Client 13 today?

Builds are flowing in the current channel for the very latest Chef Client 13 hotness. You can download them manually from https://downloads.chef.io/chef/current/. To use them with Test Kitchen, add the following to your .kitchen.yml:

provisioner:
  product_name: chef
  channel: current

What's the best way to test my cookbooks?

Running Chef Client 12.19 is the most informative way to test your cookbooks. That will tell you, with links to the documentation, which deprecated features you're using. Once you have a cookbook with no deprecation errors, add a suite in Test Kitchen that installs the latest Chef Client 13 build and test there too.

I've found a bug in Chef Client 13!

We'd love to hear about it. While each current build runs through an extensive test suite on many platforms, we can't test every cookbook ever made! If you do find a bug, please file an issue at Chef's GitHub issues page with the information we request.

What should I do if a community cookbook I depend on has deprecation warnings?

The very best thing you can do is to submit a Pull Request fixing it. All cookbook maintainers are working through upgrading to Chef Client 13, and if you can help one of them out then you're doing good for everyone. If you can't do that, then definitely submit an Issue telling the maintainer that there's a deprecation warning. You can find the Issues URL and the Source URL in Supermarket for each cookbook.

Chef's dedicated Cookbook Engineering team are ensuring that all the cookbooks that Chef makes available are error free on Chef 13 in time for the launch, but do file issues or contact them in the #cookbook-engineering channel on our Community Slack.

What versions of Chef Server are compatible with Chef Client 13?

All versions of Chef Server 12 are compatible with Chef Client 13. As always, we recommend that our users should upgrade to the latest release of Chef Server to ensure maximum performance and stability.

When will support end for Chef Client 11 and 12?

We always support the current and previous major versions of Chef, so currently we support Chef Client 11 and 12. When we release Chef Client 13, Chef Client 11 will be End Of Life, but Chef Client 12 will continue to be supported until Chef Client 14 is released in April 2018.

End Of Life means that we will no longer provide any new releases of Chef Client 11, but Chef Server and Hosted Chef will continue to allow users of Chef Client 11 to synchronise their nodes. For Hosted Chef, we will announce a sunset time for support of Chef Client 11 separately.

When will AWS OpsWorks support Chef 13?

AWS OpsWorks for Chef Automate will support Chef 13 when it's released; you'll need to update your cookbooks as usual. AWS OpsWorks Stacks <placeholder>

Will there be changes to developing Custom Resources in Chef Client 13?

We're not intending to make major changes to developing Custom Resources. We've focussed heavily on ensuring correctness and safety when developing resources, and on removing some of the sharper edges.

We've made a deprecations page that specifically lists all the changes we've made to the syntax.

How should we upgrade from Chef Client 11?

You should upgrade to Chef Client 12 first. The changes between 11 and 12 are summarised at https://docs.chef.io/upgrade_client_notes.html, but mostly relate to enhanced validation of SSL certificates by the client. Once your nodes are able to communicate with the Chef Server, you can proceed with the upgrade to Chef Client 13.

Can you do multiple packages with specific versions?

Yes!

package %w{foo bar baz} do
  version ['1', '2', '3']
end

will install foo with version 1, bar with version 2, and baz with version 3.

If you have any further questions regarding the Chef Client 13 upgrade, please join us in person at ChefConf, online in the Community Slack, or on our forums.

The post Upgrading to Chef Client 13 – Answers to all your questions appeared first on Chef Blog.


Tuesday, March 28, 2017

----
Disable TELNET! Cisco finds 0-Day in CIA Dump affecting over 300 Network Switch Models
// The Hacker News

Cisco is warning of a new critical zero-day IOS / IOS XE vulnerability that affects more than 300 of its switch models. The company identified this highest-level vulnerability in its products while analyzing "Vault 7" — roughly 8,761 documents and files leaked by WikiLeaks last week, claiming to detail the hacking tools and tactics of the Central Intelligence Agency (CIA). The vulnerability

----
Vulnerability Spotlight: Certificate Validation Flaw in Apple macOS and iOS Identified and Patched
// Talos Blog

Most people don't give much thought to what happens when you connect to your bank's website or log in to your email account. For most people, securely connecting to a website seems as simple as checking to make sure the little padlock in the address bar is present. However, in the background there are many different steps taken to ensure you are safely and securely connecting to websites that are who they claim to be. This process includes certificate validation, or making sure that the servers users are connecting to present "identification" showing they are legitimate. This helps to protect users from fraudulent servers that might otherwise steal sensitive information.

Due to the sensitive nature of this process, software vulnerabilities that adversely impact the security of certificate validation could have major consequences. Unfortunately, digital systems are complex and bugs are an inevitable reality in software development. Identifying vulnerabilities and responsibly disclosing them improves the security of the internet by eliminating potential attack vectors. Talos is committed to improving the overall security of the internet and today we are disclosing TALOS-2017-0296 (CVE-2017-2485), a remote code execution vulnerability in the X.509 certificate validation functionality of Apple macOS and iOS. This vulnerability has been responsibly disclosed to Apple and software updates have been released that address this issue for both macOS and iOS.

Vulnerability Details


TALOS-2017-0296 (CVE-2017-2485) was identified by Aleksandar Nikolic of Talos.

A use-after-free vulnerability in the X.509 certificate validation functionality of Apple macOS and iOS has been identified which could lead to arbitrary code execution. This vulnerability manifests due to improper handling of X.509v3 certificate extensions fields. A specially crafted X.509 certificate could trigger this vulnerability and potentially result in remote code execution on the affected system.

On Apple macOS and iOS, most client applications (e.g. Safari, Mail.app, Google Chrome) use the built-in system certificate validation agent to validate an X.509 certificate. An application that passes a malicious certificate to the certificate validation agent could trigger this vulnerability. Possible scenarios where this could be exploited include users connecting to a website which serves a malicious certificate to the client, Mail.app connecting to a mail server that provides a malicious certificate, or opening a malicious certificate file to import into the keychain.

For the full details, please read our vulnerability report.

Talos has confirmed macOS Sierra 10.12.3 and iOS 10.2.1 are vulnerable. Older versions of macOS and iOS are likely affected. However, Talos has not verified that they are.

Coverage


Talos has developed the following Snort rules to detect attempts to exploit this vulnerability. Note that these rules are subject to change pending additional vulnerability information. For the most current information, please visit your FireSIGHT Management Center or Snort.org.

Snort Rule: 41999

Protecting Customers


Bugs are an inevitable part of software development. With the complexity of digital systems only set to increase, identifying bugs that are security issues will remain a major challenge that Talos will continue to undertake. By researching ways to identify vulnerabilities and responsibly disclosing them, we can improve the security of our customers' networks and the entire internet.

For other vulnerabilities Talos has disclosed, please visit our Vulnerability Report Portal: http://www.talosintelligence.com/vulnerability-reports/

To review our Vulnerability Disclosure Policy, please refer to our policy here:
http://www.cisco.com/c/en/us/about/security-center/vendor-vulnerability-policy.html


----
XenCenter 7.1 update now available!
// Latest blog entries

A hotfix (XS71E001) has been released for customers using XenCenter as the management console for their XenServer 7.1 virtual environments. 

This hotfix offers improvements in XenCenter UI responsiveness, as well as several fixes associated with host health check analysis, status reports and updates. Additional information pertaining to this hotfix can be found here.

As always, we encourage customers to read the hotfix release notes and install the hotfix to avoid any of the issues described in the notes.

 


----
Linux networking: It's not just SDN
// Cumulus Networks Blog

Oftentimes, Cumulus Linux gets confused for an SDN (software-defined networking) solution. In conversations with potential customers, I've noticed that some of them find it difficult to distinguish between SDN, open networking and Cumulus Linux. When I talk to network engineers, I start by clarifying the SDN buzzword head on. The term gets overused, and is often defined by other confusing acronyms or marketing jargon. To complicate things further, SDN is often thought of as equivalent to OpenFlow, which is flawed in my opinion.

What is SDN?

If I were to more accurately describe SDN based on my experiences in the networking industry, I would define it more broadly. Instead of defining SDN as a specific solution (such as OpenFlow), I define SDN as a highly automatable and programmable network infrastructure.

What SDN providers exist today?

  • OpenFlow: Many companies and communities drive OpenFlow solutions, but today there is no guarantee any one solution can interoperate with any other.
  • Proprietary or vendor-specific: Solutions such as Cisco's ACI and Juniper Contrail are closed solutions that are positioned as SDN. Arguably, certain OpenFlow solutions can fall into this category as well, since they don't all adhere to an OpenFlow standard.
  • Network virtualization with technologies like VXLAN: Cumulus Networks believes that network virtualization (VMware NSX, Midokura MidoNet, Cumulus Networks EVPN, and even OpenContrail) is the way forward for this type of SDN. To learn more about network virtualization, refer to our documentation on the subject.

SDN vs. Linux diagram

 

These are all different network solutions that solve different problems. However, they all rely heavily on network automation to operate wherever they are useful, and they all involve network infrastructure in one way or another as well. For example, Midokura MidoNet is heavily geared towards OpenStack deployments whereas VMware NSX is geared towards VMware vSphere deployments. They both can use VXLAN and both support multi-tenancy but are fundamentally different products for different customers.

How does Cumulus Linux differ from SDN?

Cumulus Linux is a NOS (Network Operating System) that enables industry-standard silicon from Broadcom and Mellanox. Cumulus Linux is not SDN by itself, but it enables SDN solutions by being a highly automatable operating system, using standards-based approaches for interoperability and allowing multiple ways of configuring network virtualization. SDN is an ambiguous term that can be solved multiple ways, and Cumulus Linux is the platform that can enable the method that customers choose for their network. To make a comparison to the retail world, we want to be the Android operating system that enables the apps (SDN) that customers want to run.

Enter NetDevOps (or DevOps for network infrastructure), which is just another way to highly automate network infrastructure. There's no special sauce, no secret app — it's just DevOps methodologies combined with a networking mindset. Cumulus Linux abstracts and maintains the configuration, and adds continuous integration and testing.

Cumulus Networks believes in being vendor agnostic, developing open solutions and leveraging standard DevOps automation tools like Ansible, Chef, Puppet and Salt. This is why we identify ourselves with other open source projects, rather than relying on marketing smokescreens.

The benefit of this is that, unlike the SDN options, Cumulus Networks allows you to completely customize your network based on your needs and your budget. You can leverage existing automation tools, existing talent and existing processes to fully automate a flexible, web-scale network.

In the end, the power of Cumulus Networks is that it's just Linux — no proprietary APIs, no proprietary CLIs — enabling network engineers and system administrators to speak the same language.

If you want to read more on NetDevOps and what is possible, refer to DevOps Tools for Modern Data Centers.

The post Linux networking: It's not just SDN appeared first on Cumulus Networks Blog.


----
VRF for Linux — a contribution to the Linux Kernel
// Cumulus Networks Blog

If you're familiar with Linux, you know how important and exciting it can be to submit new technology that is accepted into the kernel. If you're not familiar with Linux, you can take my word for it (and I highly suggest you attend one of our bootcamps). Many networking features are motivated by an OS for switches and routers, but most if not all of those features prove useful for other use cases as well. Cumulus Networks strives for a uniform operating model across switches and servers, so it makes sense for us to spend the time and effort getting these features into upstream code bases. An example of this effort is Virtual Routing & Forwarding (VRF) for Linux.

I joined Cumulus Networks in June 2015 to work on a VRF solution for Linux — to create an implementation that met the goals we wanted for Cumulus Linux and was acceptable to upstream maintainers for Linux as a whole. That solution first became available last year with Cumulus Linux 3.0, and because of the upstream push it is rolling out in general OS distributions such as Debian Stretch and Ubuntu 16.04.

This post is a bit long, so I start with a high-level overview — key points that every reader should take away from this article. I hope you get at least that far to understand the history behind VRF and why this innovation is important. After that I'll do a deep dive into the shortcomings of pre-existing Linux routing facilities and the solution that Cumulus Networks spearheaded.

A bird's eye view

The concept of VRF was first introduced around 1999 for L3 VPNs, but it has become a fundamental feature for a networking OS. VRF provides traffic isolation at layer 3 for routing, similar to how you use a VLAN to isolate traffic at layer 2. Think multiple routing tables. Most network operating systems can support thousands of VRF instances, giving customers flexibility in deploying their networks.

Over the years, there have been multiple attempts to add proper support for VRF to the Linux kernel, but those attempts were rejected by the community. The Linux kernel has supported policy routing and multiple FIB tables going back to version 2.2 (also released in 1999), but, as I discuss below, multiple FIB tables is but one part of a complete VRF for Linux solution.

Another option that emerged around 2009 is using a network namespace as a VRF. Again, I'll get into this in more detail later, but network namespaces provide a complete isolation of the networking stack which is overkill (i.e. overhead) for VRF (a Layer 3 separation) and the choice of a namespace has significant consequences on the architecture and operational model of the OS.

Cumulus Networks tried all these options and even a custom kernel patch to implement VRF for Linux, but in the end, all of them fell a bit flat. We needed a better solution and decided to spearhead the development of the feature. After many months of hard work, blood, sweat and tears, we developed a solution that works seamlessly with Linux and was accepted into the Linux Kernel. Our solution for VRF is both resource efficient and operationally efficient. It does not require an overhaul in how programs are written to add VRF support or in how the system is configured, monitored and debugged. Everything maintains a logical and consistent view while providing separation of the routing tables.

Because of our commitment to open networking, the VRF for Linux solution is now rolling out in OS distributions, allowing it to be used for everything: routing on the host, servers, containers, and switches. Based on the number of inquiries as well as patches and feature requests, the end result appears to be a hit for both networking vendors and Linux users.

View from the ground

So now that you know the high level summary, let's zoom in and look at the details from a Linux perspective. Until the recent work done by Cumulus Networks, Linux users had three choices for VRFs: policy routing and multiple FIB tables, network namespace or custom kernel patches.

Multiple FIB tables and FIB rules fall short of a VRF for Linux

Linux has supported policy routing with multiple routing tables and FIB rules to direct lookups to a table since kernel version 2.2, released in January 1999. As many people have noted, you can kind of, sort of, get a VRF-like capability with them; after all, VRF is essentially multiple routing tables. But it is an ad hoc solution at best and does not enforce the kind of isolation one expects for a VRF.

A major shortcoming with this approach is the lack of an API for a program to specify which VRF to use for an outgoing connection, or an API for a program to learn the table (VRF) for incoming connections and messages. Further, this approach lacks any strong binding between interfaces and FIB tables. Sure, FIB rules can be installed to direct packets received on an interface to a specific table, but that does not work for locally originated traffic. And FIB rules per interface do not scale as the number of interfaces increases (physical network interfaces, VLAN sub-interfaces, bridges, bonds, etc.). The rules are evaluated linearly for each FIB lookup, so an increasing rule set has a significant impact on performance.

Other shortcomings include lack of support for overlapping or duplicate network addresses across VRFs; it is more difficult (near impossible) to program hardware for hardware offload of forwarding and ensure consistency between software and hardware programming; and there is no way to have proper source address selection on egress traffic especially with the common practice of VRF-local loopback addresses.

Fundamentally, this approach is missing the ability to tie network interfaces into L3 domains with a programmatic API.

Network Namespace as a VRF? Just say No

Network namespace was introduced to the Linux networking stack in 2008 and "matured" in 2009. Since then the response to queries about VRF for Linux was to use a network namespace. While a network namespace does provide some of the properties needed for VRF, a namespace is the wrong construct for a VRF. Network namespaces were designed to provide a complete isolation of the entire network stack — devices, network addresses, neighbor and routing tables, protocol ports and sockets. Everything networking related is local to a network namespace, and tasks within Linux can be attached to only one network namespace at a time.


Network Namespaces Provide Total Stack Segmentation.

VRF on the other hand is a network layer feature and as such should really only impact FIB lookups. While the isolation provided by a namespace includes the route tables, a network namespace is much more than that, and the 'more than that' is the overhead that has significant impact on the software architecture and the usability and scalability of deploying VRF as a network namespace.


VRF is a Layer 3 Segmentation

Let's walk through a few simple examples to highlight what I mean about the impact on the software architecture and operational model of a network namespace as VRF.

Because a process in Linux is limited to a single network namespace, it will only see network devices, addresses, routes, and even networking-related data under /proc and /sys local to that namespace. For example, if you run 'ip link list' to list network devices, it will only show the devices in the namespace in which it is run. This command is just listing network interfaces, not transmitting packets via the network layer or using network layer sockets. Yet using a namespace as a VRF impacts the view of all network resources by all processes.

A NOS is more than just routing protocols and programming hardware. You need supporting programs such as lldpd for verifying network connectivity, system monitoring tools such as snmpd, tcollector and collectd and debugging tools like 'netstat', 'ip', 'ss'. Using a namespace as a VRF has an adverse impact on all of them. An lldpd instance can only use the network devices in the namespace in which lldpd runs. Therefore, if you use a network namespace for a VRF, you have to run a separate instance of lldpd for each VRF. Deploying N-VRFs means starting N-instances of lldpd. VRF is a layer 3 feature, yet the network namespace choice means users have to run multiple instances of L2 applications.

That limitation applies to system monitoring applications such as snmpd, tcollector and collectd as well. For these tools to list and provide data about the networking configuration, statistics or sockets in a VRF they need a separate instance per VRF. N-VRFs means N-instances of the applications with an additional complication of how the data from the instances are aggregated and made available via the management interface.

And these examples are just infrastructure for a network OS. How a VRF is implemented is expected to impact network layer routing protocols such as quagga/bgp. Modifying them to handle the implementation of a VRF is required. But even here the choice is either running N versions of bgpd or modifying bgpd to open a listen socket for each VRF, which means N listen sockets. As N scales into the hundreds or thousands, this wastes resources spinning up all of these processes or opening listen sockets for each VRF.

Yes, you could modify each of the code bases to be namespace aware. For example, as a VRF (network namespace) is created and destroyed, the applications either open a socket local to the namespace, spawn a thread for the namespace or just add it to a list of namespaces to poll. But there are practical limitations with this approach – for example, the need to modify each code base and work with each of the communities to get those changes accepted. In addition, it still does not resolve the N-VRF scalability issue, as there is still one socket or one thread per VRF, or the complexity of switching in and out of namespaces. Furthermore, what if a network namespace is created for something other than a VRF? For example, a container is created to run an application in isolation, or you want to create a virtual switch with a subset of network devices AND within the virtual switch you want to deploy VRFs. Now you need a mechanism to discriminate which namespaces are VRFs. This option gets complicated quickly.

And then there is consideration of how the VRF implementation maps to hardware for offload of packet forwarding.

I could go on, but hopefully you get my point: The devil is in the details. Many people and many companies have tried using network namespaces for VRFs, and it has proven time and time again to be a square peg in a round hole. You can make it fit with enough sweat and tears, but it really is forcing a design that was not meant to be. Network namespaces are great for containers where you want strict isolation in the networking stack. Namespaces are also a great way to create virtual switches, carving up a single physical switch into multiple smaller and distinct logical switches. But network namespaces are a horrible solution when you just want layer 3 isolation.

Networking vendors and proprietary solutions

To date, traditional networking vendors have solved the VRF challenge in their own way, most notably by leveraging user space networking stacks and/or custom kernel patches. These proprietary solutions may work for their closed systems, but the design choice does not align with open networking and the ability to use the variety of open source tools. Even though Linux is the primary OS used by many of these vendors, they have to release SDKs for third party applications. Software that otherwise runs on millions of Linux devices and is written to standard POSIX/Linux APIs has to be modified or hooked with preloaded libraries to run in these proprietary networking environments.

As a company committed to Open Networking, Cumulus Networks wanted a proper solution for VRF not just for Cumulus Linux but for Linux in general. We wanted a common design and implementation across all devices running Linux, including network switches and servers running Routing on the Host.

The missing piece

As mentioned earlier, the Linux networking stack supports multiple FIB tables, and it has most of what is needed to create a VRF. What it lacked (until the recent work) was a formal, programmatic construct to make it complete — some glue to bring the existing capabilities together with a consistent API and in a way that only impacts applications that use Layer 3 networking.

In early 2015, our VP of engineering, Shrijeet Mukherjee, had an idea: why not model a VRF using a netdevice that correlates to a FIB table? Netdevices (or netdevs) are a core construct in the Linux networking stack, serving as an anchor for many features — firewall rules, traffic shaping, packet captures and of course network addresses and neighbor entries. Enslaving network interfaces to the VRF device makes them part of the L3 domain, providing the strong binding between interfaces and tables. Linux users are already familiar with bridges and enslaving devices to make them a part of the bridge domain, so enslaving interfaces to a VRF device has a similar operational model. In addition, the VRF device can be assigned VRF-local loopback addresses for routing protocols, and as a netdevice, applications can use well known and established APIs (SO_BINDTODEVICE or IP_PKTINFO) to bind sockets to the VRF domain or to learn the VRF association of an incoming connection.

In short, the VRF device provides the glue for the existing capabilities to create L3 domains, without impacting the rest of the networking stack and without the need to introduce new APIs.

Getting it done

Cumulus engineers worked with the Linux networking community to get the design and code accepted into the kernel and support added to administration tools such as iproute2, libnl3 and the ifupdown2 interface manager. The result of this effort is a design and implementation that is both resource efficient and operationally efficient. It follows existing Linux networking paradigms from an operational perspective (enslaving devices to the VRF) and for administration, monitoring and troubleshooting (e.g., use of iproute2 commands).

A feature was added to allow listen sockets not bound to a specific VRF device (i.e., sockets with global scope within the namespace) to take incoming connections across all VRFs. This provides a VRF "any" capability for services, with connected sockets bound to the VRF domain in which the connection occurs. Combined with existing POSIX APIs (SO_BINDTODEVICE and IP_PKTINFO), applications can learn the L3 domain (VRF) of connected sockets. This gives users and architects a choice: bind a socket per VRF or use a 'VRF any' socket. Either way it allows a single process instance to efficiently provide service across all VRFs.
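To make that concrete, here is a minimal sketch of binding an outgoing connection into a VRF via SO_BINDTODEVICE. Ruby is used here only for brevity; the same setsockopt call exists in any language with socket access, the VRF name 'mgmt' and the address are made up, and the call typically requires elevated privileges:

require 'socket'

# Scope this socket to the 'mgmt' VRF device: route lookups for it are
# done in that VRF's FIB table rather than the default table.
sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_BINDTODEVICE, 'mgmt')
sock.connect(Socket.sockaddr_in(22, '192.0.2.10'))
puts "connected via #{sock.local_address.ip_address}"
sock.close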

With the VRF-as-a-device approach, enslaving a network interface to a VRF domain only affects layer 3 FIB lookups, so there is no impact to layer 2 applications or to the ability of system monitoring tools to work across all VRFs within a network namespace. Finally, by using existing APIs, some commands (e.g., ping -I and traceroute -i) already have VRF support. And of course, as a device, VRFs nest within network namespaces, allowing VRFs within containers and virtual switches.

Open networking and Linux

Now, 20 months later, OS distributions such as Ubuntu 16.04 and 16.10, and Debian Stretch are rolling out VRF support in their kernels and user space components. With this built-in implementation, deploying VRF is standardized across devices using Linux as the OS. You can use the same commands to configure VRFs on your host OS as you do on your switches running Cumulus Linux. Software that supports the bind-to-device API is VRF aware, and the same software can run on hosts, servers and switches.

That is the power of Open Networking and Linux.

"For technical documentation on how to configure VRF in user-space on Cumulus Linux see our technical documentation.

The post VRF for Linux — a contribution to the Linux Kernel appeared first on Cumulus Networks Blog.


----
The first and only NOS to support LinkedIn's Open19 project
// Cumulus Networks Blog

Today we are excited to announce our support of Open19, a project spearheaded by LinkedIn. Open19 simplifies and standardizes the 19-inch rack form factor and increases interoperability between different vendors' technology. Built on the principles of openness, Open19 allows many more suppliers to produce servers that will interoperate and will be interchangeable in any rack environment.

We are thrilled to be the first and only network operating system supporting Open19 for two reasons. First, this joint solution offers complete choice throughout the entire stack — increasing interoperability and efficiency. We believe the ease of use of this new technology helps expand the footprint of web-scale networking and makes it even more accessible and relevant.

The second reason is that we are continually dedicated to innovation within the open community, and this is one more way we can support that mission. We believe that disaggregation is not only the future but the present (read more about why we think disaggregation is here to stay). When a company like LinkedIn jumped into the disaggregate ring, we knew we wanted to be a part of it.

What is Open19?

The primary component, Brick Cage, is a passive mechanical cage that fits in any EIA 19-inch rack, allowing increased interoperability. Brick Cage comes in 12RU or 8RU form-factors with 2RU modularity.

The Open19 platform is based on standard building blocks with the following specifications:

  • Standard 19-inch 4 post rack
  • Brick cage
  • Brick (B), Double Brick (DB), Double High Brick (DHB)
  • Power shelf—12 volt distribution, OTS power modules
  • Optional Battery Backup Unit (BBU)
  • Optional Networking switch (ToR)
  • Snap-on power cables/PCB—200-250 watts per brick
  • Snap-on data cables—up to 100G per brick
  • Provides linear growth on power and bandwidth based on brick size

As a standardized open solution, Open19 promises 3 to 5 times faster rack level integration, which will result in reduced time to market.

 

Open19 and Cumulus Networks

 

The purpose of Open19:

  • Create an open standard that can fit any 19" rack environment for server, storage and networking
  • Optimize base rack cost
    • Reduce commons by 50%
  • Enable fast rack integration
    • 2-3x faster integration time
  • Build an ecosystem that will consolidate requirements and volumes
    • High adoption level
  • Create a solution that will have applicability for large, medium, and small scale data centers

How Cumulus Networks fits in:

Cumulus Linux is the first and only open network operating system to support the Open19 switch. With shared benefits of easy adoption, customization and maximized economics, Open19 and Cumulus Linux help customers realize the benefits of web-scale networking while standardizing on the EIA 19-inch rack to increase interoperability.

What to do next:

If you're interested in an Open19 Brick cage featuring Cumulus Linux, contact our knowledgeable sales team. And to give Cumulus Linux a spin at zero cost, try Cumulus VX.

If you'd like to learn more about the joint solution, check out this solution brief.

The post The first and only NOS to support LinkedIn's Open19 project appeared first on Cumulus Networks Blog.


----
Migration away from download.cloud.com to download.cloudstack.org may cause problems in existing CloudStack installations and versions
// CloudStack Consultancy & CloudStack...

Background

CloudStack relies on a fixed download site when it requires system templates for the default guest VMs. That download site has historically been download.cloud.com, which is being replaced by download.cloudstack.org.

download.cloudstack.org is now fully functional. The retirement date of download.cloud.com is unknown, but it is expected to be imminent.

The issue & behaviour

After the retirement of download.cloud.com, the following issues may be experienced:

  • when installing CloudStack for the first time, failures will occur when downloading the required templates
  • for existing installations of CloudStack, if administrators or users attempt to re-download a template (for example when creating a new zone), failures will occur.

Versions affected

This issue affects Apache CloudStack version 4.9.2 and ALL PRIOR VERSIONS

CloudStack 4.10, due for release imminently, is NOT affected by this issue. Future versions should not be affected by this issue.

Resolution

The following steps will update an existing CloudStack version to use the new download site. This process should also be followed, in advance of installation, when attempting to install a new instance of CloudStack for affected versions.

 

1. List the URLs to update
Locate the 'cloud' database and run this SQL command against it, replacing <user-id> and <your password>:

$ echo "SELECT id,url FROM vm_template WHERE url LIKE '%download.cloud.com%' AND NOT removed IS NULL\g" | mysql -u <user-id> -p<your password> cloud

This will return all URLs that CloudStack uses for downloads that are pointing to download.cloud.com

A number of rows should be returned. The following is a sample output

id  url

11  http://download.cloud.com/templates/builtin/centos-7-x86_64.tar.gz

13  http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ova

If no rows are returned, you are not affected by this issue and need do nothing further. If rows are returned, proceed to step 2.

2. Check that ALL templates are present on the new download site.

All templates that were previously located at download.cloud.com should be in an identical location at download.cloudstack.org. However, we advise that you confirm this by attempting to manually download all of the templates from the same directory at download.cloudstack.org.

To do this, take every result returned at step 1 and attempt to manually download it from the same location at download.cloudstack.org.

Taking the above examples, check that you are able to download:
http://download.cloudstack.org/templates/builtin/centos-7-x86_64.tar.gz

http://download.cloudstack.org/templates/4.2/systemvmtemplate-4.2-vh7.ova

(the files don't actually need to be downloaded at this stage, you are just checking for their existence)

If all templates are present, then continue to step 3. If any are missing (you will receive a 404 error), then contact users@cloudstack.org or your support provider

3. Update the URLs in the vm_template table
Update any URLs that point to download.cloud.com. This can be performed in a SQL editing tool or by running a statement such as:

UPDATE vm_template SET url = REPLACE(url, 'download.cloud.com', 'download.cloudstack.org') WHERE INSTR(url, "download.cloud.com") > 0 AND removed IS NULL;

 


----
XenApp and XenDesktop 7.12 & 7.13 FAQs
// Citrix Blogs

On March 1st, we hosted the What's New and What's Coming with Citrix XenApp and XenDesktop Webinar.

If you weren't able to attend, I highly recommend watching the on-demand recording. We have over one hour, jam-packed with …

  



----

Read in my feedly


Sent from my iPhone

Two-Way URL Redirection is Now Available with XenApp and XenDesktop 7.13



----
Two-Way URL Redirection is Now Available with XenApp and XenDesktop 7.13
// Citrix Blogs

Because so many people depend on browser-based apps to get their work done, it's hardly surprising that Citrix customers depend on XenApp to securely deliver those apps to employees. And that's why we continue to focus on making browser-based apps

  



----

Read in my feedly


Sent from my iPhone

Receiver 12.5 for Mac with Improved Graphics Experience



----
Receiver 12.5 for Mac with Improved Graphics Experience
// Citrix Blogs

A key Citrix hallmark has been our commitment to deliver secure access to apps and data on any device, and that's why a lot of people who use Macs depend on XenApp and XenDesktop to get their work done. Our

  



----

Read in my feedly


Sent from my iPhone

Docker Container Compliance with InSpec



----
Docker Container Compliance with InSpec
// Chef Blog

Thanks to its speed and approachability, Docker has done a great deal to make containers popular. Need a quick Redis server? docker run redis and boom, you've got a Redis server. However, compared to traditional hosts and virtual machines, containers are considerably more difficult to reason about. Is my software in the container the version I expect? Is my software configured properly? Am I using a dependency that has a known vulnerability? Depending on how your container was built or from where it was retrieved, you may not be able to easily answer these questions.

Intro to InSpec

InSpec by Chef is an open-source testing framework that uses a human-readable language to define infrastructure tests and compliance controls. InSpec can locally or remotely test a host and report back its compliance status.

InSpec provides an incredibly easy way to answer questions such as:

  • Is package "my_app" installed?
  • Is server "my_service" running?
  • Is the SSH server configured to only accept protocol version 2?
  • Is the "max_allowed_packet" setting in the "mysql" section of "/etc/my.cnf" set to "16M"?

Once you create a profile containing controls built with the many resources available in InSpec, the profile becomes Compliance as Code, allowing a host's compliance to be scanned and reported automatically. Scanning a host is as simple as running inspec exec PROFILE_NAME. For example, to scan a host locally using a profile called frontend_alpha:

inspec exec frontend_alpha

To scan a host via SSH using the same profile:

inspec exec frontend_alpha -t ssh://192.168.1.100

InSpec does not need to install any software on a remote host to be able to successfully determine its compliance status.

Compliance for Containers

In addition to scanning a host locally or remotely, InSpec can inspect a Docker container via the Docker API. This provides the ability to make assertions about a live, running container without requiring any changes to the container's contents or build process.

For example, to scan a running container with the ID of fa215305c18e as listed in the output of docker ps:

inspec exec frontend_alpha -t docker://fa215305c18e

This is an incredibly powerful capability. As an organization's compliance controls evolve, containers do not need to be rebuilt to include additional data (such as inventories or additional software), nor do containers' origins need to be traced to determine how they were built or what the builds contain. Simply modify the profile with additional controls, and the existing container can be rescanned.

Scanning the Docker Host

It's not enough to simply scan the containers. If the Docker host itself (the host on which all the containers are running) is vulnerable, the security posture of the containers cannot be guaranteed.

The Center for Internet Security's (CIS) Docker 1.11.0 Benchmark is one effort to document a set of best practices for proper Docker host security configuration. However, much like many traditional compliance rules and guidelines, it is provided as a PDF file which, in and of itself, cannot be automated.

Thankfully, the Dev-Sec.io project, to which the InSpec maintainers contribute regularly, has published an open-source InSpec profile that implements the CIS Docker Benchmark. Since InSpec can read profiles using many methods, including via HTTP to a git repository, scanning a Docker host is as simple as:

inspec exec https://github.com/dev-sec/cis-docker-benchmark -t ssh://192.168.123.11

The Dev-Sec.io project also provides a number of other InSpec profiles and Chef cookbooks that can be used to detect and remediate common OS and application hardening concerns.

Publishing and Sharing Profiles

In addition to the profiles created by the Dev-Sec.io project, members of the InSpec community are publishing their own profiles on the Chef Supermarket. On the Supermarket, you may find a profile that already fits your needs or find a profile that can serve as a great starting point. Profiles can depend on other profiles; for more information see the "Profile Dependencies" section of the profiles documentation page.
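As a rough sketch of how a dependency works (the wrapper file and skipped control ID below are made-up names for illustration), a profile declares the dependency in its inspec.yml and can then reuse the dependency's controls in its own control files:

# controls/wrapper.rb -- illustrative only
# Assumes inspec.yml declares the dependency, for example:
#   depends:
#     - name: cis-docker-benchmark
#       git: https://github.com/dev-sec/cis-docker-benchmark
#
# Include every control from the dependent profile, skipping one that does not apply
include_controls 'cis-docker-benchmark' do
  skip_control 'some-control-id'   # placeholder control ID
end

Running inspec exec against the wrapper profile then evaluates the inherited controls alongside any controls defined locally.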

If you have a profile you think your fellow InSpec community members could benefit from, we'd love to help publish your contribution on the Supermarket!

Wrapping Up

Compliance doesn't need to be an afterthought, and using containers doesn't need to be a roadblock for achieving high degrees of compliance. Learn more about InSpec's easy-to-use framework for creating compliance-as-code and scanning your infrastructure at http://inspec.io, and join us in the #inspec channel in the Chef Community Slack team.

The post Docker Container Compliance with InSpec appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Chef at SCaLE15x



----
Chef at SCaLE15x
// Chef Blog

The 15th annual SCaLE (Southern California Linux Expo) was held March 2-5, 2017, at the Pasadena Convention Center. This four-day gathering of Linux enthusiasts was filled with 200+ speakers and presentations, 90+ exhibits, and lots of special events.

On day one, Nathen Harvey presented two sessions: Compliance Automation with InSpec, and Application Automation with Habitat. In the compliance session, Nathen presented the capabilities of the Chef Automate compliance server and how to configure it. Attendees also learned how to perform compliance scans against Windows and Linux nodes, repair compliance issues with Chef, and run compliance reports. We ended the day by creating and modifying compliance profiles using InSpec.

In the application automation session, participants learned how to package and run applications with Habitat. During the lesson, attendees worked in small groups to create a Java application using Tomcat and MongoDB. Try the labs at https://github.com/chef-cft/habdemo, join our Habitat Slack channel, or find a workshop near you.

The learning continued on the second day with 9 main categories of presentations. Nathen Harvey gave a 45-minute presentation about the relationship between application developers and DevOps, and the importance of knowing job boundaries in order to create an effective working environment.

My colleague, Trevor Hess, told me his favorite part of the expo was the history-of-Linux hallway track. "I learned a lot about how the different distros came to be. I also loved how welcoming the event was – it was really cool to see how many people there were just getting started down the path of the tech industry."

We really enjoyed catching up with #ChefFriends and meeting new members of our community at the expo. Hopefully we'll see many of you again at ChefConf 2017 in Austin, TX, May 22-24.

If you're an LA local, check out the Los Angeles Chef Users Group. We'd love to have you join our Meetup group.

The post Chef at SCaLE15x appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone