What is Bonding & How to Configure Bonding in Linux


Simple definition:
Bonding is a Linux kernel feature that aggregates multiple link interfaces (such as eth0 and eth1) into a single virtual link, such as bond0. The idea is simple: get higher data rates as well as link failover.

Red Hat's documentation describes bonding as follows: Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy.

If one physical NIC goes down or is unplugged, traffic automatically moves to the other NIC. Channel bonding works with the help of the bonding driver in the kernel.

How to Create NIC Channel Bonding in RedHat, CentOS and Fedora?


Step 1: Create a Bonding Channel Configuration File
Red Hat-based systems store network configuration by default in the /etc/sysconfig/network-scripts/ directory. First, we need to create a bond0 configuration file as follows, changing values such as the IP address to suit your environment.
[root@server ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
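A typical ifcfg-bond0 looks like the following; the IP address, netmask, and gateway here are placeholder values, so adjust them for your environment:

```
DEVICE=bond0
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
```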


Save and close the file.

Step 2: Modify the eth0 and eth1 configuration files (the network cards you wish to include in the bond)

Edit both configuration files as follows for the eth0 and eth1 interfaces. The MASTER and SLAVE directives are mandatory: they specify which bonding channel each network card belongs to, which matters when you have multiple bonds on the same server.

[root@server ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
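For example, ifcfg-eth0 would contain something like the following (assuming the bond created above is named bond0):

```
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
```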


[root@server ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
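And ifcfg-eth1 is identical apart from the device name:

```
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
```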


Save and close the files.

Step 3: Load the bonding driver/module

Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:

[root@server ~]# vi /etc/modprobe.conf
Append the following two lines:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Save the file and exit to the shell prompt. Note that on newer distributions (RHEL 6 and later) /etc/modprobe.conf no longer exists; place the options line in a file under /etc/modprobe.d/ instead, such as /etc/modprobe.d/bonding.conf. You can learn more about all the bonding options in detail below.

Step 4: Test configuration

First, load the bonding module:
[root@server ~]# modprobe bonding
Restart the networking service in order to bring up the bond0 interface:
[root@server ~]# service network restart
Make sure everything is working. Use the following cat command to query the current status of the Linux kernel bonding driver:
# cat /proc/net/bonding/bond0
Sample outputs:
Bonding Mode: adaptive load balancing
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:63
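For scripted health checks, this /proc status output can be parsed with standard tools. Here is a minimal sketch using awk, shown against an embedded copy of sample data; on a live system you would feed it /proc/net/bonding/bond0 instead:

```shell
# Print each slave interface together with its MII status.
bond_slave_status() {
  awk '/^Slave Interface:/ { slave = $3 }
       slave != "" && /^MII Status:/ { print slave, $3; slave = "" }'
}

# Illustration with inline sample data; normally you would run:
#   bond_slave_status < /proc/net/bonding/bond0
bond_slave_status <<'EOF'
MII Status: up
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Slave Interface: eth1
MII Status: up
EOF
# Output:
# eth0 up
# eth1 up
```

The first "MII Status" line belongs to the bond itself, not a slave, which is why the awk script only prints a status line after it has seen a "Slave Interface" header.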

To list all network interfaces, enter:
[root@server ~]# ifconfig
Sample outputs:
bond0     Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
 inet addr:  Bcast:  Mask:
 inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
 RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:250825 (244.9 KiB)  TX bytes:244683 (238.9 KiB)
eth0      Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
 inet addr:  Bcast:  Mask:
 inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
 RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:251161 (245.2 KiB)  TX bytes:180289 (176.0 KiB)
 Interrupt:11 Base address:0x1400
eth1      Link encap:Ethernet  HWaddr 00:0C:29:C6:BE:59
 inet addr:  Bcast:  Mask:
 inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
 RX packets:4 errors:0 dropped:0 overruns:0 frame:0
 TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:258 (258.0 b)  TX bytes:66516 (64.9 KiB)
 Interrupt:10 Base address:0x1480

Many administrators assume that bonding multiple network cards together instantly yields double the bandwidth plus high availability in case a link goes down. Unfortunately, this is not entirely true. Let's start with the most common example: a server with high network load, where you wish to allow more than 1 Gb/s.

Bonding with 802.3ad

You connect two interfaces to your switch, enable bonding, and discover half your packets are getting lost. If Linux is configured for 802.3ad link aggregation, the switch must also be told about this. In the Cisco world, this is called an EtherChannel. Once the switch knows those two ports are actually supposed to use 802.3ad, it will load balance the traffic destined for your attached server.

This works great when a large number of network connections from a diverse set of clients are involved. If, however, the majority of the throughput comes from a single server, you won't get better than the 1 Gb/s port speed. Switches load balance based on the source MAC address by default, so if only one connection takes place, it always gets sent down the same link. Many switches support changing the load balancing algorithm, so if you fall into the single server-to-server category, make sure you configure the switch to round-robin the Ethernet frames.
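On the Linux side, 802.3ad mode is selected with the same options line used in Step 3. A sketch, where the miimon and lacp_rate values are illustrative and xmit_hash_policy controls how flows are spread across the links:

```
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4
```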

Generic Bonding

There are multiple modes you can set in Linux, and the most common "generic" one is balance-alb. This mode works effectively in most situations, without needing to configure a switch or trick anything else. It does, however, require that your network interface support changing the MAC address on the fly. This mode works well "generically" because it is constantly swapping MAC addresses to trick the other end (be it a switch or another connected host) into sending traffic across both links. This can wreak havoc on a Cisco network with port security enabled, but in general it's a quick and dirty way to get it working.

Channel Bonding Modes

Channel Bonding modes can be broken into three categories: generic, those that require switch support, and failover-only.

The failover-only mode is active-backup: One port is active until the link fails, then the other takes over the MAC and becomes active.
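Since active-backup needs no switch cooperation, it is the simplest mode to deploy. A sketch of the corresponding options line, where the optional primary parameter names the preferred interface:

```
options bond0 mode=active-backup miimon=100 primary=eth0
```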

Modes that require switch support are:

    balance-rr: Frames are transmitted in a round-robin fashion without hashing, to truly load balance.
    802.3ad: This mode is the official standard for link aggregation, and includes many configurable options for how to balance the traffic.
    balance-xor: Traffic is hashed and balanced according to the receiver on the other end. This mode is also available as part of 802.3ad.

Note that modes requiring switch support can also be run back-to-back with crossover cables between two servers. This is especially useful, for example, when using DRBD to replicate block devices.

Generic modes include:

    broadcast: This mode is not really link aggregation - it simply broadcasts all traffic out both interfaces, which can be useful when sending data to partitioned broadcast domains for high availability (see below). If using broadcast mode on a single network, switch support is recommended.
    balance-tlb: Outgoing traffic is load balanced, but incoming only uses a single interface. The driver will change the MAC address on the NIC when sending, but incoming always remains the same.
    balance-alb: Both sending and receiving frames are load balanced using the change MAC address trick.
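On modern kernels, the mode can also be inspected or changed at runtime through sysfs. A sketch of such a session (the bond must be down and have no slaves attached before its mode can be changed):

```
# cat /sys/class/net/bond0/bonding/mode
balance-alb 6
# ip link set bond0 down
# echo 802.3ad > /sys/class/net/bond0/bonding/mode
# ip link set bond0 up
```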

Hope you enjoyed this article. Please share and leave your comments.
July 17, 2015
