Wednesday, August 30, 2017

5 host network configurations for MLAG



// Cumulus Networks Blog

Host network configurations for Multi-Chassis Link Aggregation (MLAG, also referred to as dual-attach or 'high availability') can vary from host OS to host OS, even amongst Linux distributions. The recommended and most robust method is Link Aggregation Control Protocol (LACP), which is supported natively on most host operating systems. "Bond" or "bonding" can refer to a variety of bonding methods, but for the purposes of this article it refers to LACP bonds. Depending on the vendor's nomenclature, the terms etherchannel, link aggregation group (LAG), NIC teaming, port-channel and bond are used interchangeably for LACP; for the sake of simplicity, we will just call it a bond or bonding. This post will take you through the steps for host network configurations for MLAG across five different operating systems.

Why LACP? LACP is an IEEE standard, 802.3ad, that has been available since 2000. This makes for a highly interoperable, standards-based approach to bonding that works across many network vendors and host operating systems. LACP is superior to static configuration (also referred to as bond-mode "on") because a control protocol keeps the bond active, so failover is predictable and automatic. This also helps when a layer 1 issue leaves one side of a connection thinking the link is still up: LACP will bring the logical link down. Static configuration is also prone to misconfiguration on the first try, because the only mechanism it has to detect whether the bond is configured correctly is the physical state of the interface, which is no help when you are connected to the wrong switch or through a media converter. With that in mind, let's discuss the host network configurations for MLAG.
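As a quick sanity check once a bond is up, the Linux bonding driver exposes the negotiated LACP state. A minimal sketch, assuming the bond is named lacpbond as in the examples later in this post:

# Inspect the bonding driver's view of the LACP negotiation
# (bond name "lacpbond" is assumed from the examples below).
cat /proc/net/bonding/lacpbond

# Healthy output includes lines like:
#   Bonding Mode: IEEE 802.3ad Dynamic link aggregation
#   LACP rate: fast
# and, under the 802.3ad info, a Partner Mac Address other than
# 00:00:00:00:00:00, meaning the switch is actually speaking LACP.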

In this post we will cover multiple host operating systems:

  • Debian Linux
  • Ubuntu Linux
  • Red Hat Enterprise Linux (RHEL) and CentOS
  • Windows Server 2016
  • VMware vSphere

This reference diagram will work for all configuration examples – just imagine the host OS has been installed on server01.

Debian Linux

Debian Linux uses ifupdown for flat-file configuration. This configuration will also work on Cumulus Linux since the newer ifupdown2 is also backwards compatible with ifupdown. Look at a comparison of ifupdown vs ifupdown2 to learn more. It is possible (and recommended) to install ifupdown2 on both Debian and Ubuntu.
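A minimal sketch of installing it, assuming the ifupdown2 package is available from the distribution's repositories (or an added repository on older releases):

# ifupdown2 is a drop-in replacement for ifupdown; the same
# /etc/network/interfaces syntax keeps working.
apt-get update
apt-get install ifupdown2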

Debian Configuration (/etc/network/interfaces; 7.0 "wheezy", 8.0 "jessie" and later)
##########################
auto lacpbond
iface lacpbond inet static
        address 192.168.1.101/24
        bond-slaves uplink1 uplink2
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-min-links 1
        bond-xmit-hash-policy layer3+4

auto lacpbond.100
iface lacpbond.100 inet static
        address 192.168.100.101/24

auto lacpbond.101
iface lacpbond.101 inet static
        address 192.168.101.101/24

auto lacpbond.102
iface lacpbond.102 inet static
        address 192.168.102.101/24
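With those stanzas saved in /etc/network/interfaces, bringing the bond up is just ifup (or, with ifupdown2, a single ifreload). A rough sketch using the interface names above:

# Bring up the bond and its VLAN sub-interfaces after editing
# /etc/network/interfaces.
ifup lacpbond lacpbond.100 lacpbond.101 lacpbond.102

# With ifupdown2 installed, re-apply the whole file in one step instead:
ifreload -a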

 

Ubuntu Linux

Ubuntu is similar to Debian except that the slave interfaces must be configured as type "inet manual" and assigned to their respective bond using the "bond-master" keyword.

auto uplink1
iface uplink1 inet manual
        bond-master lacpbond

auto uplink2
iface uplink2 inet manual
        bond-master lacpbond

auto lacpbond
iface lacpbond inet static
        address 192.168.1.101/24
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-xmit-hash-policy layer3+4
        bond-slaves none

auto lacpbond.100
iface lacpbond.100 inet static
        address 192.168.100.101/24
        vlan-raw-device lacpbond

auto lacpbond.101
iface lacpbond.101 inet static
        address 192.168.101.101/24
        vlan-raw-device lacpbond

auto lacpbond.102
iface lacpbond.102 inet static
        address 192.168.102.101/24
        vlan-raw-device lacpbond
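One prerequisite worth calling out for the dotted VLAN sub-interfaces: they depend on 802.1Q support in the kernel, and older Debian/Ubuntu releases also expect the vlan package. A small sketch:

# 802.1Q tagging for lacpbond.100/.101/.102 needs the 8021q module;
# older releases also want the vlan package installed.
apt-get install vlan
modprobe 8021q
echo 8021q >> /etc/modules   # make sure it loads at boot as well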

 

Red Hat Enterprise Linux (RHEL) and CentOS

RHEL and CentOS have a variety of ways to configure networking: nmtui (NetworkManager Text User Interface), nmcli (NetworkManager Command Line Tool), ifcfg flat files, and even a GUI (GNOME). The nmtui and GUI options are not really suited to data centers, as they are designed for interactive point-and-click use and don't work with automation. The nmcli method is automatable; however, nmcli responds the same way whether or not a command actually changed anything, which makes return codes difficult to use for automation. For example, Ansible would report "changed" every time a playbook runs, regardless of whether anything actually changed. This makes nmcli difficult to use in CI/CD (Continuous Integration / Continuous Delivery).

Cumulus Networks highly recommends the standard flat-file method, splitting the configuration into separate ifcfg files. This method is common, battle hardened, easy to automate with a variety of tools, and well documented.

First, configure the physical links. On this example server the physical links are named uplink1 and uplink2.

/etc/sysconfig/network-scripts/ifcfg-uplink1

DEVICE=uplink1
NAME=lacpbond-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=lacpbond
SLAVE=yes

/etc/sysconfig/network-scripts/ifcfg-uplink2

DEVICE=uplink2
NAME=lacpbond-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=lacpbond
SLAVE=yes

Next, configure the bond logical interface:

/etc/sysconfig/network-scripts/ifcfg-lacpbond

DEVICE=lacpbond
NAME=lacpbond
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"

Then, configure the tagged interfaces (for each VLAN). Each of these will again be a separate file.

/etc/sysconfig/network-scripts/ifcfg-lacpbond.100

DEVICE=lacpbond.100
IPADDR=192.168.100.101
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
VLAN=yes

This continues, one file per VLAN, for however many VLANs you want to configure.
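Once all of the ifcfg files are in place, the network service needs to re-read them. A sketch for RHEL/CentOS 7 (on hosts managed entirely by NetworkManager, "nmcli connection reload" followed by bringing the connections up does the same job):

# Re-read the ifcfg files and bring the bond, slaves and VLANs up
# (RHEL/CentOS 7 style network service).
systemctl restart network

# Confirm the bond is running 802.3ad and the tagged interface has its address.
cat /sys/class/net/lacpbond/bonding/mode
ip addr show lacpbond.100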
 

Windows Server 2016

Windows Server 2016 can be configured through either the Windows GUI or PowerShell. For this document, we will only cover PowerShell.

 

PS C:\> New-NetLbfoTeam Team1 uplink1,uplink2 -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts

 

TransportPorts, as Microsoft describes it, is also called Address Hash in the GUI. This mode builds a hash from the TCP/UDP ports and the source and destination IP addresses, so the traffic distribution matches the layer3+4 hash on the ToR (Top of Rack) switch.
 

VMware vSphere 6.0+

LACP on VMware requires a vSphere Distributed Switch (vDS) version 5.1, 5.5 or 6.0. It can be configured in two ways through the vSphere Web Client.
 
Interested in learning even more about LACP and host network configurations for MLAG? Then you should check out our three-part blog series on LACP! The series covers design choices, how MLAG interacts with the host, and how state is shared between the host and the upstream network. Head over to our blog if you'd like to become an LACP scholar.


