Sunday, April 14, 2019

What Does the New Chef Mean for the Community?


Chef would not exist without its community. Our Open Source community built the foundation upon which every part of Chef stands. Together we continue to evolve and grow and shape each other as Chef moves into its third chapter.

At Chef we firmly believe that Open Source Software – built by a community of contributors across industries – is better quality software.  More eyes on the code, more perspectives, and more contributions make better software. Better software and better community means Chef has a bigger impact on the world. Open Sourcing more of our kit (starting with Automate) means more opportunities for both internal Chef engineers and external contributors to interact with each other, learn from each other, and build on each other's knowledge and experience.

A desire to make managing my infrastructure less painful is what brought me to Chef as an engineer. The community is what made me stay and keeps me here to this day. Every member honestly wants every other member to succeed. We both celebrate and build upon each other's successes — as I learned so clearly as a core maintainer of Supermarket and of Habitat. When any member of the Chef Community thrives, we all thrive. This is unusual for any community. It's shocking for an online community.

Community is not just something that makes us feel good; community is absolutely mandatory at a business, industry, and world level. The technical problems we face as organizations, industries, governments, even as humanity itself are impossible to solve alone. The stakes are too high not to code together, to collaborate and build upon each other's work. The DevOps movement started with the (still) radical concept that managing technology does not have to be as hard and as deeply painful as it has historically been. I remember telling Gene Kim about the first time I read The Phoenix Project. The first half made me cry because it was so familiar: an organization paralyzed and crippled under the weight of both its technical and cultural debt. This burden is too much for one person, one team, one company, even one industry to bear by themselves. Community means no one is alone.

The reason for commercializing the builds of our software (while keeping the code open source) is to provide the financing for continuing development on these projects as well as the creation of new projects. Our previous "freemium" model provided incentive to keep some code closed, which ultimately was negative for both our software and our community. Our new business model financially incentivizes us to create more Open Source software and more community around that software. The business tension between whether to put resources on the "free" offerings or the "paid" offerings is no more.

What does this mean for members of the Chef Community?

Chef's Open Source codebases are and will remain (including Chef Automate within the month) under the Apache 2.0 license. Anyone is free to fork, build, and distribute our software within the bounds of the Apache 2.0 license (and as long as our trademarks are respected) – for more guidance on this, see our trademark policy and our guidelines for distribution.

What is changing is the license on the distributions/builds of these products produced by Chef Software. Starting with the next major version of all of our products (e.g. Chef 15 and InSpec 4), any distributions created by Chef will require acceptance of new license terms. The license acceptance will be required on use of the product, not on download (you will be able to accept the license automatically through a command line flag, an environment variable, or a specific file located on disk) – which means it is easily automatable. These new license terms will NOT apply to previous versions of our products (e.g. Chef 14 and InSpec 3) – you may continue using these versions with no changes (though support for them will end one year after the release of the next major version).
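As an illustration of how this automation looks in practice, based on the mechanisms documented for Chef Infra Client 15 (verify the exact flag and variable names against your installed version):

# Accept the license via a command line flag:
chef-client --chef-license accept

# Or via an environment variable, which is convenient in CI pipelines:
CHEF_LICENSE="accept" chef-client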

What's in the new license on distributions?

What does this license say?

  • If you are an individual using Chef's distributions for personal use, you can use them for free
  • If you are an organization that is experimenting or prototyping with Chef's software, you can use Chef's distributions for free
  • If you are an organization using Chef for commercial purposes, you are required to have a commercial relationship with Chef in order to use Chef's distributions.

Does that mean all organizations have to pay money to use Chef's distributions?

  • If you are a non-profit or other "for good" organization (within limitations – guidance on this is coming shortly), you can use Chef's distributions for free
  • If you are an organization that has made a significant contribution to our Open Source projects, you may be eligible to apply for commercial considerations related to licensing Chef's software. Details on this process will be coming shortly.

What if your organization does not fall into those categories and you don't want to pay for a Chef distribution? You have two options:

  • You can freeze on the previous version of the product (which will be supported for one year after the release of the next major version)
  • You can fork the code, remove trademarks, and build your own distribution (see our guidelines for distribution for more information).

What about forks or community distributions?

A question that has come up frequently since we announced our business model change is what if someone creates a community fork or distro of a Chef product? The answer is we would welcome this with open arms. Anyone is free to fork our products. Anyone is free to build and distribute and even commercialize our products in any way they see fit, as long as our trademark policy is respected. Together, we hope to build upon each other's learnings and successes as a vibrant technical community.

Another question many have asked is "Why doesn't Chef create and run a community fork or distro, like RedHat does with CentOS?" It's important to remember that although CentOS is sponsored by RedHat today, it was separate from RedHat for the first 10 years of its existence. By the time RedHat took ownership of CentOS, CentOS had proven its viability and wide adoption across the industry. Could a community fork eventually be sponsored by Chef? Yes, that is a possibility, but we need to see a fork's viability and adoption over a long period of time before we consider bringing it directly into Chef. If you are interested in this, join the #community-distros channel of the Chef Community Slack.

How will we function as a community?

Changing to a model of 100% Open Source Product code makes both Chef the company and the Chef community stronger. Historically, Chef has consisted of three separate communities (Chef, InSpec, and Habitat) – each with their own Open Source governance processes. With the addition of Automate, our Open Source footprint has grown considerably. Separate governance systems and separate communities are no longer viable – we need one consistent (though not necessarily uniform) system of governance across all of our projects as one community. When someone contributes to a Chef project, they should be able to follow nearly the same process to contribute to an InSpec or Habitat project.

An internal tiger team at Chef (including myself, Robb Kidd, Tim Smith, Ian Henry, Miah Johnson, Jerry Aldrich, and Gina Peers, with the wonderful help of community members brought in under NDA) has created "The Chef Book of Open Source" – which defines how we create Open Source software at Chef and how we function as an Open Source Community. The processes defined in this book will be rolled out in stages across Chef's 1000+ Open Source projects over the next several months. What is in the repository today is only the first draft of this book; as a community we will learn and iterate on it as we go along. Want to be involved? Check out the repository, create issues, create pull requests, and let's get the discussion going!

We need your help! Let's start the conversation online in the Chef Community Slack and Discourse Forums. Additionally, we will be holding a Community Summit on Monday, May 20th as part of ChefConf. You can register for both ChefConf and Community Summit for a discounted rate of $595 with the code SummitChef19.  

Please join me in building and growing the Chef Community and the future of Chef together!



Introducing the New Chef: 100% Open, Always


Today, Chef is announcing meaningful changes to the way that we build and distribute our software. Chef has always believed in the power of open source. This philosophy is core to the way that we think about software innovation. There is no better way to build software than in the open in partnership with individuals and companies who use our stack in the real world. And for enterprises and other organizations facing complex challenges, Chef backs up our software by building and supporting distributions for our projects with the resources necessary for these organizations to succeed.

Going forward, we are doubling down on our commitment to OSS development as we extend our support for the needs of enterprise-class transformation. Starting today, we will expand the scope of our open source licensing to include 100% of our software under the Apache 2.0 license (consistent with our existing Chef Infra, Chef InSpec, and Chef Habitat license terms) without any restrictions on the use, distribution or monetization of our source code as long as our trademark policy is respected. We welcome anyone to use and extend our software for any purpose in alignment with the four essential freedoms of Free Software.

We aren't making this change lightly. Over the years we have experimented with and learned from a variety of different open source, community and commercial models, in search of the right balance. We believe that this change, and the way we have made it, best aligns the objectives of our communities with our own business objectives. Now we can focus all of our investment and energy on building the best possible products in the best possible way for our community without having to choose between what is "proprietary" and what is "in the commons." Most importantly, we can do that, with each of you, completely in the open. This means that all of the software that we produce will be created in public repos. It also means that we will open up more of our product development process to the public, including roadmaps, triage and other aspects of our product design and planning process. 

We believe the best software is created by collaborating with the people who use it, so that it encapsulates the goals, expertise and innovations of the diverse Chef community. 

Introducing Chef Enterprise Automation Stack

In addition to our commitment to community-based open source software development, Chef has also deepened our understanding of the needs of our enterprise customers. Enterprises demand a more curated and streamlined way to deploy and update our software and content. They want a relationship with us as the leading experts in DevOps, automation, and Chef products. And, beyond just technical innovations, these companies require assurance in the form of warranties, indemnifications, and support. To fulfill that need, Chef is announcing a new commercial distribution, Chef Enterprise Automation Stack, that will be licensed and tailored exclusively for commercial customers of Chef. We will make our distributions freely available for non-commercial use, experimentation, and individuals so anyone can get started with ease.

This new packaging and distribution approach is the culmination of several years' worth of hard work and product development. Chef Enterprise Automation Stack is anchored by Chef Workstation, the quickest way to get a development environment up and running, and Chef Automate as the enterprise observability and management console for the system. Also included is Chef Infra (formerly just Chef) for infrastructure automation, Chef InSpec for security and compliance automation, and Chef Habitat for application deployment and orchestration automation.

When you purchase a Chef subscription you get our commitment to be the best in the world at:

  • Enterprise distribution: Tested, hardened software distributions proven in mission critical environments.
  • Enterprise content and software updates: The fastest, most reliable way to get Chef products, updates and supported content. Your deployment will always be up to date with the latest Chef technology.
  • Automation Expertise: Access to the leading experts in automation, DevOps and Chef products.
  • Assurance & Support: Broad-based assurance in the form of 24×7 enterprise class support, warranty and indemnification.

We are committed to and focused on being stewards of the Chef community, maintaining the community governance model, and maintaining the upstream in a way that welcomes all who want to participate in it, however they choose.

What this means

To summarize the changes we are announcing today:

  • Chef will move all of our product software development to an open source model with 100% of our product code available and licensed under Apache 2.0 to better align our business objectives with our community objectives.
  • Chef will produce a new distribution (release), Chef Enterprise Automation Stack, built for commercial users with new terms and conditions of use.
  • Chef will be the best in the world at delivering our enterprise distribution, content, updates, expertise, assurance, and support.
  • Chef is committed to being the best steward of our open source community, welcoming anyone's participation, however they see fit through our community-led governance model.

We understand that there will be more questions surrounding these announcements and have worked to answer as many as possible in these Frequently Asked Questions.

These changes mark an exciting moment in Chef's history and define how strongly we believe code is the mechanism for collaboration, trust, and velocity that will enable tomorrow's leading organizations. For us here at Chef, the future is very bright and we look forward to working with customers and our community to create the next generation of amazing, useful software.

Happy Automating!

Barry Crist
CEO





Thursday, August 30, 2018

Kali Linux 2018.3 Release




Another edition of Hacker Summer Camp has come and gone. We had a great time meeting our users, new and old, particularly at our Black Hat and DEF CON Dojos, which were led by our great friend @ihackstuff and the rest of the Offensive Security crew. Now that everyone is back home, it's time for our third Kali release of 2018, which is available for immediate download.

Kali 2018.3 brings the kernel up to version 4.17.0, and while 4.17.0 did not introduce many changes, 4.16.0 had a huge number of additions and improvements, including more Spectre and Meltdown fixes, improved power management, and better GPU support.

New Tools and Tool Upgrades

Since our last release, we have added a number of new tools to the repositories, including:

  • idb – An iOS research / penetration testing tool
  • gdb-peda – Python Exploit Development Assistance for GDB
  • datasploit – OSINT Framework to perform various recon techniques
  • kerberoast – Kerberos assessment tools

In addition to these new packages, we have also upgraded a number of tools in our repos including aircrack-ng, burpsuite, openvas, wifite, and wpscan.
For the complete list of updates, fixes, and additions, please refer to the Kali Bug Tracker Changelog.

Download Kali Linux 2018.3

If you would like to check out this latest and greatest Kali release, you can find download links for ISOs and Torrents on the Kali Downloads page along with links to the Offensive Security virtual machine and ARM images, which have also been updated to 2018.3. If you already have a Kali installation you're happy with, you can easily upgrade in place as follows.

root@kali:~# apt update && apt -y full-upgrade

Making sure you are up-to-date

To double check your version, first make sure your Kali package repositories are correct.

root@kali:~# cat /etc/apt/sources.list
deb http://http.kali.org/kali kali-rolling main non-free contrib

Then, after running apt -y full-upgrade, a reboot may be required before checking:

root@kali:~# grep VERSION /etc/os-release
VERSION="2018.3"
VERSION_ID="2018.3"

If you come across any bugs in Kali, please open a report on our bug tracker. It's more than a little challenging to fix what we don't know about.





Are Containers Replacing Virtual Machines?




With 20,000 partners and attendees converging at VMworld in Las Vegas this week, we often get asked if containers are replacing virtual machines (VMs). Many of our Docker Enterprise customers run their containers on virtualized infrastructure while others run them on bare metal. Docker gives IT teams and operators a choice of where to run their applications – in a virtual machine, on bare metal, or in the cloud. In this blog we'll provide a few thoughts on the relationship between VMs and containers.

Containers versus Virtual Machines

Point #1: Containers Are More Agile than VMs

At this stage of container maturity, there is very little doubt that containers give both developers and operators more agility. Containers deploy quickly, deliver immutable infrastructure and solve the age-old "works on my machine" problem. They also replace the traditional patching process, allowing organizations to respond to issues faster and making applications easier to maintain.

Point #2: Containers Enable Hybrid and Multi-Cloud Adoption

Once containerized, applications can be deployed on any infrastructure – on virtual machines, on bare metal, and on various public clouds running different hypervisors. Many organizations start with running containers on their virtualized infrastructure and find it easier to then migrate to the cloud without having to change code.

Point #3: Integrate Containers with Your Existing IT Processes

Most enterprise organizations have a mature virtualization environment which includes tooling around backups, monitoring, and automation, and people and processes that have been built around it. By running Docker Enterprise on virtualized infrastructure, organizations can easily integrate containers into their existing practices and get the benefits of points 1 and 2 above.

Running Containers Inside Virtual Machines

Point #4: Containers Save on VM Licensing

Containerized applications share common operating system and software libraries, which greatly improves CPU utilization within a VM. This means an organization can reduce the overall number of virtual machines needed to operate their environment and increase the number of applications that can run on a server. Docker Enterprise customers often see 50% increased server consolidation after containerizing, which means lower hardware costs and savings on VM and OS licensing.

What About Bare Metal?

Just as organizations have reasons for using different servers or different operating systems, there are reasons that some organizations will want to run containers directly on bare metal. This is often due to performance or latency concerns or for licensing and cost reasons.

What About Security?

Containers are inherently secure on their own. Docker containers create isolation layers between applications and between the application and the host, and reduce the host surface area, which protects both the host and the co-located containers by restricting access to the host. Docker containers running on bare metal have the same high-level restrictions applied to them as they would if running on virtual machines. But Docker containers also pair well with virtualization technologies by protecting the virtual machine itself and providing defense in depth for the host.
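As an illustration of tightening that isolation further at run time, the standard Docker CLI exposes options along these lines (the image and command here are just examples):

# Run a container with a reduced privilege surface: a read-only root
# filesystem, all Linux capabilities dropped, and a cap on process count.
docker run --rm --read-only --cap-drop ALL --pids-limit 100 \
  alpine:3.8 echo "hello from a locked-down container"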

And the Winner Is…

In the end, Docker containers can run inside a virtual machine or on bare metal – the choice is up to you. Just like every other decision in the data center, the path you want to go down should align to your business priorities. Containers work well with virtual machines, but they can also run without them.










The “Depend on Docker” Philosophy at Baker Hughes, a GE Company




Alex Iankoulski and Arun Subramaniyan co-authored this blog.

BHGE is the world's leading full stream Oil & Gas company on a mission to find better ways to deliver energy to the world. BHGE Digital develops enterprise-grade, cloud-first SaaS solutions to improve efficiency and reduce non-productive time for the Oil & Gas industry.

In our group, we have developed an analytics-driven product portfolio to enable company-wide digital transformation for our customers. Challenges range from predicting the failures of mission-critical industrial assets such as gas turbines to optimizing the conditions of an Electric Submersible Pump (ESP) to increase production, all of which require building and maintaining sophisticated analytics at scale.

The past few years have taught us this: where there is a whale, there is a way!

We were happy to share our story at DockerCon recently, and wanted to share it here on the Docker blog as well. You can watch the session here:

[Video: DockerCon 2018 session "Depend on Docker" recording]

We face two major challenges in delivering advanced analytics:

  1. Data silos
     We must handle a multitude of data sources that range from disconnected historical datasets to high speed sensor streams. Industrial data volumes and velocities dwarf even the largest ERP implementations.

  2. Analytics silos
     Analytics silos consist of complex analytics written over several decades in multiple programming languages (polyglot) and runtime environments. The need to orchestrate these analytics to work together to produce a valuable outcome makes the challenge doubly hard.

Our approach to solving the hardest problems facing the industrial world is to combine the power of domain expertise with modern deep learning/machine learning/probabilistic techniques and scalable software practices.

At BHGE, we have developed innovative solutions to accelerate software development in a scalable and sustainable way. The top two questions that our developers in the industrial world face are: How can we make software development easier? How can we make software that can be built, shipped, and run on Mac, Windows, Linux, on-prem, and on any cloud platform?

Docker Enterprise allows us to break down silos, reduce complexities, encapsulate dependencies, accelerate development, and scale at will. We use Docker Enterprise for everything from building to testing and deploying software. Other than a few specialized cases, we find very little reason to run anything outside of the Docker container platform.

We gave a live talk as part of the Transformational Stories track at DockerCon 2018, titled "Depend on Docker", where we discussed our journey to accelerate ideas to production software.

In our talk, we cover use cases that need a polyglot infrastructure and bring together highly diverse groups (scientists, aerospace and petroleum engineers, and software architects) to co-create a production application (you can watch the video or see the slides).

For us, a project qualifies as "depend-on-docker" if the only "external" dependency it needs to go from source to running software is Docker. In the spirit of DockerCon, at the talk we demonstrated and open-sourced our depend-on-docker project, and showed examples of some projects that follow the "depend-on-docker" philosophy, such as semtktree and enigma (follow the links to our Github pages).

In addition to its ease of use, we have made starting your own depend-on-docker project on Linux or Windows really simple. We hope that after you take a look at our GitHub or watch our DockerCon video you will be inspired to build anything you can imagine and convinced that the only external dependency you need is Docker!
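To make the philosophy concrete, a minimal "depend-on-docker" style wrapper could look like the sketch below – the image tag and script name are hypothetical and not part of the open-sourced project itself:

#!/usr/bin/env bash
# run.sh (hypothetical sketch) – the only tool assumed on the host is Docker.
set -euo pipefail

IMAGE=my-app:latest   # hypothetical image tag

# Build the image from the Dockerfile at the repository root; all
# compilers, runtimes and libraries live inside the image.
docker build -t "$IMAGE" .

# Run the freshly built image and remove the container afterwards.
docker run --rm "$IMAGE"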




The post The "Depend on Docker" Philosophy at Baker Hughes, a GE Company appeared first on Docker Blog.





Networking KVM for CloudStack – a 2018 revisit for CentOS7 and Ubuntu 18.04




Introduction

We published the original blog post on KVM networking in 2016 – but in the meantime we have moved on a generation in the CentOS and Ubuntu operating systems, and some of the original information is therefore out of date. In this revisit of the original blog post we cover new configuration options for CentOS 7.x as well as Ubuntu 18.04, both of which are now supported hypervisor operating systems in CloudStack 4.11. Ubuntu 18.04 has replaced the legacy networking model with the new Netplan implementation, which means different configuration for both linux bridge and OpenVswitch setups.

KVM hypervisor networking for CloudStack can sometimes be a challenge, considering KVM doesn't quite have the same mature guest networking model found in the likes of VMware vSphere and Citrix XenServer. In this blog post we're looking at the options for networking KVM hosts using bridges and VLANs, and dive a bit deeper into the configuration for these options. Installation of the hypervisor and CloudStack agent is pretty well covered in the CloudStack installation guide, so we'll not spend too much time on this.

Network bridges

On a linux KVM host, guest networking is accomplished using network bridges. These are similar to vSwitches on a VMware ESXi host or networks on a XenServer host (in fact, networking on a XenServer host is also accomplished using bridges).

A KVM network bridge is a Layer-2 software device which allows traffic to be forwarded between ports internally on the bridge and the physical network uplinks. The traffic flow is controlled by MAC address tables maintained by the bridge itself, which determine which hosts are connected to which bridge port. The bridges allow for traffic segregation using traditional Layer-2 VLANs as well as SDN Layer-3 overlay networks.
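As a quick illustration, the MAC address table a bridge uses for its forwarding decisions can be inspected with the standard tools (the bridge name here matches the examples later in this post):

# brctl showmacs cloudbr0
# bridge fdb show br cloudbr0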


Linux bridges vs OpenVswitch

The bridging on a KVM host can be accomplished using traditional linux bridge networking or by adopting the OpenVswitch back end. Traditional linux bridges have been implemented in the linux kernel since version 2.2, and have been maintained through the 2.x and 3.x kernels. Linux bridges provide all the basic Layer-2 networking required for a KVM hypervisor back end, but they lack some automation options and are configured on a per-host basis.

OpenVswitch was developed to address this, and provides additional automation in addition to new networking capabilities like Software Defined Networking (SDN). OpenVswitch allows for centralised control and distribution across physical hypervisor hosts, similar to distributed vSwitches in VMware vSphere. Distributed switch control does require additional controller infrastructure like OpenDaylight, Nicira, VMware NSX, etc. – which we won't cover in this article as it's not a requirement for CloudStack.

It is also worth noting Citrix started using the OpenVswitch backend in XenServer 6.0.

Network configuration overview

For this example we will configure the following networking model, assuming a linux host with four network interfaces which are bonded for resilience. We also assume all switch ports are trunk ports:

  • Network interfaces eth0 + eth1 are bonded as bond0.
  • Network interfaces eth2 + eth3 are bonded as bond1.
  • Bond0 provides the physical uplink for the bridge "cloudbr0". This bridge carries the untagged host network interface / IP address, and will also be used for the VLAN tagged guest networks.
  • Bond1 provides the physical uplink for the bridge "cloudbr1". This bridge handles the VLAN tagged public traffic.

The CloudStack zone networks will then be configured as follows:

  • Management and guest traffic is configured to use KVM traffic label "cloudbr0".
  • Public traffic is configured to use KVM traffic label "cloudbr1".

In addition to the above it's important to remember CloudStack itself requires internal connectivity from the hypervisor host to system VMs (Virtual Routers, SSVM and CPVM) over the link local 169.254.0.0/16 subnet. This is done over a host-only bridge "cloud0", which is created by CloudStack when the host is added to a CloudStack zone.
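Once a host has been added to a zone, the cloud0 bridge and its link-local address can be verified with, for example:

# ip addr show dev cloud0
# brctl show cloud0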

 


Linux bridge configuration – CentOS

In the following CentOS example we have changed the NIC naming convention back to the legacy "eth0" format rather than the new "eno16777728" format. This is a personal preference – and is generally done to make automation of configuration settings easier. The configuration suggested throughout this blog post can also be implemented using the new NIC naming format.

Across all CentOS versions the "NetworkManager" service is also generally disabled, since this has been found to complicate KVM network configuration and cause unwanted behaviour:

# systemctl stop NetworkManager
# systemctl disable NetworkManager

To enable bonding and bridging CentOS 7.x requires the modules installed / loaded:

# modprobe --first-time bonding
# yum -y install bridge-utils

If IPv6 isn't required we also add the following lines to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
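To apply the new kernel settings without a reboot, reload them from the file:

# sysctl -p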

In CentOS the linux bridge configuration is done with configuration files in /etc/sysconfig/network-scripts/. Each of the four individual NIC interfaces are configured as follows (eth0 / eth1 / eth2 / eth3 are all configured the same way). Note there is no IP configuration against the NICs themselves – these purely point to the respective bonds:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0  
DEVICE=eth0
NAME=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
HWADDR=00:0C:12:xx:xx:xx
NM_CONTROLLED=no

The bond configurations are specified in the equivalent ifcfg-bond scripts and specify bonding options as well as the upstream bridge name. In this case we're just setting a basic active-passive bond (mode=1) with up/down delays of zero and status monitoring every 100ms (miimon=100). Note there are a multitude of bonding options – please refer to the CentOS / RedHat official documentation to tune these to your specific use case.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0  
DEVICE=bond0
NAME=bond0
TYPE=Bond
BRIDGE=cloudbr0
ONBOOT=yes
NM_CONTROLLED=no
BONDING_OPTS="mode=active-backup miimon=100 updelay=0 downdelay=0"

The same goes for bond1:

# vi /etc/sysconfig/network-scripts/ifcfg-bond1  
DEVICE=bond1
NAME=bond1
TYPE=Bond
BRIDGE=cloudbr1
ONBOOT=yes
NM_CONTROLLED=no
BONDING_OPTS="mode=active-backup miimon=100 updelay=0 downdelay=0"

Cloudbr0 is configured in the ifcfg-cloudbr0 script. In addition to the bridge configuration we also specify the host IP address, which is tied directly to the bridge since it is on an untagged VLAN:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0  
DEVICE=cloudbr0
ONBOOT=yes
TYPE=Bridge
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
NM_CONTROLLED=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
DELAY=0

Cloudbr1 does not have an IP address configured hence the configuration is simpler:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1  
DEVICE=cloudbr1
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
NM_CONTROLLED=no
DELAY=0
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no

Optional tagged interface for storage traffic

If a dedicated VLAN tagged IP interface is required for e.g. storage traffic, this can be accomplished by creating a VLAN on top of the bond and tying this to a dedicated bridge. In this case we create a new bridge on bond0 using VLAN 100:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0.100
DEVICE=bond0.100
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
TYPE=Unknown
BRIDGE=cloudbr100

The bridge can now be configured with the desired IP address for storage connectivity:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr100  
DEVICE=cloudbr100
ONBOOT=yes
TYPE=Bridge
VLAN=yes
IPADDR=10.0.100.20
NETMASK=255.255.255.0
NM_CONTROLLED=no
DELAY=0

Internal bridge cloud0

When using linux bridge networking there is no requirement to configure the internal "cloud0" bridge, this is all handled by CloudStack.

Network startup

Note – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To make the configuration live restart the network service:

# systemctl restart network  

To check the bridges use the brctl command:

# brctl show  
bridge name     bridge id           STP enabled     interfaces
cloudbr0        8000.000c29b55932   no              bond0
cloudbr1        8000.000c29b45956   no              bond1

The bonds can be checked with:

# cat /proc/net/bonding/bond0  
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

Linux bridge configuration – Ubuntu

With the 18.04 "Bionic Beaver" release Ubuntu have retired the legacy way of configuring networking through /etc/network/interfaces in favour of Netplan – https://netplan.io/reference. This changes how networking is configured – although the principles around bridge configuration are the same as in previous Ubuntu versions.

First of all ensure correct hostname and FQDN are set in /etc/hostname and /etc/hosts respectively.

To stop network bridge traffic from traversing IPtables / ARPtables on the host, add the following lines to /etc/sysctl.conf:

# vi /etc/sysctl.conf  
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Ubuntu 18.04 installs the "bridge-utils" package by default, and the bridge and bonding kernel modules are also loaded by default, hence there is no requirement to add anything to /etc/modules.
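This can be quickly verified by checking the loaded modules, for example:

# lsmod | grep -E 'bonding|bridge'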

In Ubuntu 18.04 all interface, bond and bridge configuration is done using cloud-init and the Netplan configuration in /etc/netplan/XX-cloud-init.yaml. As for CentOS, we are configuring basic active-passive bonds (mode=1) with status monitoring every 100ms (miimon=100), and configuring bridges on top of these. As before, the host IP address is tied to cloudbr0:

# vi /etc/netplan/50-cloud-init.yaml  
network:
    ethernets:
        eth0:
            dhcp4: no
        eth1:
            dhcp4: no
        eth2:
            dhcp4: no
        eth3:
            dhcp4: no
    bonds:
        bond0:
            dhcp4: no
            interfaces:
                - eth0
                - eth1
            parameters:
                mode: active-backup
                primary: eth0
        bond1:
            dhcp4: no
            interfaces:
                - eth2
                - eth3
            parameters:
                mode: active-backup
                primary: eth2
    bridges:
        cloudbr0:
            addresses:
                - 192.168.100.20/24
            gateway4: 192.168.100.1
            nameservers:
                search: [mycloud.local]
                addresses: [192.168.100.5,192.168.100.6]
            interfaces:
                - bond0
        cloudbr1:
            dhcp4: no
            interfaces:
                - bond1
    version: 2

Optional tagged interface for storage traffic

To add an optional VLAN tagged interface for storage traffic, add a VLAN and a new bridge to the above configuration:

# vi /etc/netplan/50-cloud-init.yaml  
    vlans:
        bond100:
            id: 100
            link: bond0
            dhcp4: no
    bridges:
        cloudbr100:
            addresses:
                - 10.0.100.20/24
            interfaces:
                - bond100

Internal bridge cloud0

When using linux bridge networking the internal "cloud0" bridge is again handled by CloudStack, i.e. there's no need for specific configuration to be specified for this.

Network startup

Note – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To make the configuration live, reload Netplan with:

# netplan apply  
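If you are unsure about the configuration, Netplan can also apply it with an automatic rollback unless the change is confirmed within a timeout (120 seconds by default):

# netplan try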

To check the bridges use the brctl command:

# brctl show  
bridge name     bridge id           STP enabled     interfaces
cloud0          8000.000000000000   no
cloudbr0        8000.52664b74c6a7   no              bond0
cloudbr1        8000.2e13dfd92f96   no              bond1
cloudbr100      8000.02684d6541db   no              bond100

To check the VLANs and bonds:

# cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
bond100 | 100 | bond0

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 10
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 10
Permanent HW addr: 00:0c:xx:xx:xx:xx
Slave queue ID: 0

 

OpenVswitch bridge configuration – CentOS

The OpenVswitch version in the standard CentOS repositories is relatively old (version 2.0). To install a newer version, either locate and install one from a third party CentOS/Fedora/RedHat repository, or download and compile the packages from the OVS website http://www.openvswitch.org/download/ (notes on how to compile the packages can be found in http://docs.openvswitch.org/en/latest/intro/install/fedora/).

Once packages are available install and enable OVS with

# yum localinstall openvswitch-<version>.rpm
# systemctl start openvswitch
# systemctl enable openvswitch

In addition to this the bridge module should be blacklisted. Experience has shown that even blacklisting this module does not prevent it from being loaded. To force this, set the module install command to /bin/false. Please note the CloudStack agent install depends on the bridge module being in place, hence this step should be carried out after agent install.

echo "install bridge /bin/false" > /etc/modprobe.d/bridge-blacklist.conf  

As with linux bridging above, the following examples assume IPv6 has been disabled and legacy ethX network interface names are used. In addition, the hostname has been set in /etc/sysconfig/network and /etc/hosts.

Add the initial OVS bridges using the ovs-vsctl toolset:

# ovs-vsctl add-br cloudbr0
# ovs-vsctl add-br cloudbr1
# ovs-vsctl add-bond cloudbr0 bond0 eth0 eth1
# ovs-vsctl add-bond cloudbr1 bond1 eth2 eth3

This will configure the bridges in the OVS database, but the settings will not be persistent. To make the settings persistent we need to configure the network configuration scripts in /etc/sysconfig/network-scripts/, similar to when using linux bridges.

Each individual network interface has a generic configuration – note there is no reference to bonds at this stage. The following ifcfg-eth script applies to all interfaces:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0  
DEVICE="eth0"  TYPE="Ethernet"  BOOTPROTO="none"  NAME="eth0"  ONBOOT="yes"  NM_CONTROLLED=no  HOTPLUG=no  HWADDR=00:0C:xx:xx:xx:xx  

The bonds reference the interfaces as well as the upstream bridge. In addition the bond configuration specifies the OVS specific settings for the bond (active-backup, no LACP, 100ms status monitoring):

# vi /etc/sysconfig/network-scripts/ifcfg-bond0  
DEVICE=bond0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=cloudbr0
BOOTPROTO=none
BOND_IFACES="eth0 eth1"
OVS_OPTIONS="bond_mode=active-backup lacp=off other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100"
HOTPLUG=no

# vi /etc/sysconfig/network-scripts/ifcfg-bond1

DEVICE=bond1
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=cloudbr1
BOOTPROTO=none
BOND_IFACES="eth2 eth3"
OVS_OPTIONS="bond_mode=active-backup lacp=off other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100"
HOTPLUG=no

The bridges are now configured as follows. The host IP address is specified on the untagged cloudbr0 bridge:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0  
DEVICE=cloudbr0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
HOTPLUG=no

Cloudbr1 is configured without an IP address:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1  
DEVICE=cloudbr1
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=none
HOTPLUG=no

Internal bridge cloud0

Under CentOS 7.x and CloudStack 4.11 the cloud0 bridge is automatically configured, hence no additional configuration steps are required.

Optional tagged interface for storage traffic

If a dedicated VLAN tagged IP interface is required for e.g. storage traffic this is accomplished by creating a VLAN tagged fake bridge on top of one of the cloud bridges. In this case we add it to cloudbr0 with VLAN 100:

# ovs-vsctl add-br cloudbr100 cloudbr0 100  
# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr100  
DEVICE=cloudbr100
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.0.100.20
NETMASK=255.255.255.0
OVS_OPTIONS="cloudbr0 100"
HOTPLUG=no

Additional OVS network settings

To finish off the OVS network configuration specify the hostname, gateway and IPv6 settings:

# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=kvmhost1.mylab.local
GATEWAY=192.168.100.1
NETWORKING_IPV6=no
IPV6INIT=no
IPV6_AUTOCONF=no

VLAN problems when using OVS

Kernel versions older than 3.3 had some issues with VLAN traffic propagating between KVM hosts. This has not been observed in CentOS 7.5 (kernel version 3.10) – however if this issue is encountered look up the OVS VLAN splinter workaround.

Network startup

Note – as mentioned for linux bridge networking – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To make the configuration live restart the network service:

# systemctl restart network  

To check the bridges use the ovs-vsctl command. The following shows the optional cloudbr100 on VLAN 100:

# ovs-vsctl show  
49cba0db-a529-48e3-9f23-4999e27a7f72
    Bridge "cloudbr0"
        Port "cloudbr0"
            Interface "cloudbr0"
                type: internal
        Port "cloudbr100"
            tag: 100
            Interface "cloudbr100"
                type: internal
        Port "bond0"
            Interface "veth0"
            Interface "eth0"
    Bridge "cloudbr1"
        Port "bond1"
            Interface "eth1"
            Interface "veth1"
        Port "cloudbr1"
            Interface "cloudbr1"
                type: internal
    Bridge "cloud0"
        Port "cloud0"
            Interface "cloud0"
                type: internal
    ovs_version: "2.9.2"

The bond status can be checked with the ovs-appctl command:

# ovs-appctl bond/show bond0
---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 00:0c:xx:xx:xx:xx(eth0)

slave eth0: enabled
  active slave
  may_enable: true

slave eth1: enabled
  may_enable: true

To ensure that only OVS bridges are used also check that linux bridge control returns no bridges:

# brctl show
bridge name     bridge id           STP enabled     interfaces

As a final note – the CloudStack agent also requires the following two lines added to /etc/cloudstack/agent/agent.properties after install:

network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
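For the new agent settings to take effect, restart the agent afterwards (on CentOS 7.x the systemd unit is normally named cloudstack-agent):

# systemctl restart cloudstack-agent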

OpenVswitch bridge configuration – Ubuntu

As discussed earlier in this blog post, Ubuntu 18.04 introduced Netplan as a replacement for the legacy "/etc/network/interfaces" network configuration. Unfortunately Netplan does not support OVS, hence the first challenge is to revert Ubuntu to the legacy configuration method.

To disable Netplan first of all add "netcfg/do_not_use_netplan=true" to the GRUB_CMDLINE_LINUX option in /etc/default/grub. The following example also shows the use of legacy interface names as well as IPv6 being disabled:

GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 ipv6.disable=1 netcfg/do_not_use_netplan=true"  

Then rebuild GRUB and reboot the server:

grub-mkconfig -o /boot/grub/grub.cfg  

To set the hostname first of all edit "/etc/cloud/cloud.cfg" and set this to preserve the system hostname:

preserve_hostname: true  

Thereafter set the hostname with hostnamectl:

hostnamectl set-hostname --static --transient --pretty <hostname>  

Now remove Netplan, and install OVS from the Ubuntu repositories as well as the "ifupdown" package to get standard network functionality back:

apt-get purge nplan netplan.io
apt-get install openvswitch-switch
apt-get install ifupdown

As for CentOS we need to blacklist the bridge module to prevent standard bridges being created. Please note the CloudStack agent install depends on the bridge module being in place, hence this step should be carried out after agent install.

echo "install bridge /bin/false" > /etc/modprobe.d/bridge-blacklist.conf  

To stop network bridge traffic from traversing IPtables / ARPtables also add the following lines to /etc/sysctl.conf:

# vi /etc/sysctl.conf  
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Same as for CentOS we first of all add the OVS bridges and bonds from command line using the ovs-vsctl command line tools. In this case we also add the additional tagged fake bridge cloudbr100 on VLAN 100:

# ovs-vsctl add-br cloudbr0
# ovs-vsctl add-br cloudbr1
# ovs-vsctl add-bond cloudbr0 bond0 eth0 eth1 bond_mode=active-backup other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100
# ovs-vsctl add-bond cloudbr1 bond1 eth2 eth3 bond_mode=active-backup other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100
# ovs-vsctl add-br cloudbr100 cloudbr0 100

As for linux bridge all network configuration is applied in "/etc/network/interfaces":

# vi /etc/network/interfaces  
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interfaces
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual

auto cloudbr0
allow-ovs cloudbr0
iface cloudbr0 inet static
  address 192.168.100.20
  netmask 255.255.255.0
  gateway 192.168.100.1
  dns-nameserver 192.168.100.5
  ovs_type OVSBridge
  ovs_ports bond0

allow-cloudbr0 bond0
iface bond0 inet manual
  ovs_bridge cloudbr0
  ovs_type OVSBond
  ovs_bonds eth0 eth1
  ovs_option bond_mode=active-backup other_config:miimon=100

auto cloudbr1
allow-ovs cloudbr1
iface cloudbr1 inet manual

allow-cloudbr1 bond1
iface bond1 inet manual
  ovs_bridge cloudbr1
  ovs_type OVSBond
  ovs_bonds eth2 eth3
  ovs_option bond_mode=active-backup other_config:miimon=100

Network startup

Since Ubuntu 14.04 the bridges have started automatically without any requirement for additional startup scripts. Since OVS uses the same toolset across both CentOS and Ubuntu the same processes as described earlier in this blog post can be utilised.

# ovs-appctl bond/show bond0
# ovs-vsctl show

To ensure that only OVS bridges are used also check that linux bridge control returns no bridges:

# brctl show
bridge name     bridge id           STP enabled     interfaces

As mentioned earlier the following also needs added to the /etc/cloudstack/agent/agent.properties file:

network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver

Internal bridge cloud0

In Ubuntu there is no requirement to add additional configuration for the internal cloud0 bridge, CloudStack manages this.

Optional tagged interface for storage traffic

Additional VLAN tagged interfaces are again accomplished by creating a VLAN tagged fake bridge on top of one of the cloud bridges. In this case we add it to cloudbr0 with VLAN 100 at the end of the interfaces file:

# ovs-vsctl add-br cloudbr100 cloudbr0 100  
# vi /etc/network/interfaces  
auto cloudbr100
allow-cloudbr0 cloudbr100
iface cloudbr100 inet static
  address 10.0.100.20
  netmask 255.255.255.0
  ovs_type OVSIntPort
  ovs_bridge cloudbr0
  ovs_options tag=100

Conclusion

As KVM is becoming more stable and mature, more people are going to start looking at using it rather than the more traditional XenServer or vSphere solutions, and we hope this article will assist in configuring host networking. As always we're happy to receive feedback, so please get in touch with any comments, questions or suggestions.

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing, implementing and automating IaaS solutions based on Apache CloudStack.




