How To Configure Network Teaming In Linux

In Linux it is possible to aggregate multiple network links together into a single logical link, which can increase either network throughput or redundancy. For example, we can assign an IP address to a group of two network interfaces to double our throughput, or reserve one interface as a backup so that we can fail over to it if the first one fails.

Here we’re going to cover how to create and configure a network team with two different network interfaces.

Link aggregation has traditionally been done with network bonding, however as of RHEL 7 teaming is the preferred method, though you can still use either option. Teaming is more efficient and can do most of what bonding does while adding extra features; see the Red Hat documentation for a full list of the differences.

Install the Teaming Daemon

In order to use network teaming you must first install the teamd package, as it is not normally installed by default.

yum install teamd -y
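To double check that the package is now present before continuing, you can simply query it; the version reported will depend on your system.

rpm -q teamd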

Configuring teaming with nmcli

The nmcli tool is a command line tool for working with NetworkManager.

First we can display the devices that are currently in place.

[root@ ~]# nmcli con show
NAME                UUID                                  TYPE            DEVICE
Wired connection 2  03973d6f-ae98-4d0b-8780-90572571713d  802-3-ethernet  eno50332208
Wired connection 1  497fe8ae-5217-4e2e-bf55-151b2ce61b50  802-3-ethernet  eno33554984
eno16777736         0dbee9e5-1e7e-4c88-822b-869cfc9e2d13  802-3-ethernet  eno16777736

In the virtual machine that I am testing with, I have 3 network interfaces. eno16777736 is the primary interface that I am using to manage the VM over SSH, so it will not be used in the team. Wired connection 1 and Wired connection 2, which correspond to devices eno33554984 and eno50332208 respectively, will be used in the team.

Next we create a team called ‘team0’.

[root@ ~]# nmcli con add type team con-name team0
Connection 'team0' (e6118c6d-3d63-449d-b6b6-5e61a44e5e44) successfully added.

Now if we show the connections again we should see team0 listed as device ‘nm-team’.

[root@ ~]# nmcli con show
NAME                UUID                                  TYPE            DEVICE
Wired connection 2  03973d6f-ae98-4d0b-8780-90572571713d  802-3-ethernet  eno50332208
Wired connection 1  497fe8ae-5217-4e2e-bf55-151b2ce61b50  802-3-ethernet  eno33554984
eno16777736         0dbee9e5-1e7e-4c88-822b-869cfc9e2d13  802-3-ethernet  eno16777736
team0               e6118c6d-3d63-449d-b6b6-5e61a44e5e44  team            nm-team

This team is not yet doing anything, so we next add in our two interfaces.

[root@ ~]# nmcli con add type team-slave ifname eno33554984 master team0
Connection 'team-slave-eno33554984' (d72bbe43-eaa9-4220-ba58-cd322f74653e) successfully added.

[root@ ~]# nmcli con add type team-slave ifname eno50332208 master team0
Connection 'team-slave-eno50332208' (898ef4eb-65cd-45b1-be93-3afe600547e2) successfully added.

Now if we show the connections again we should see the two team-slaves listed.

[root@ ~]# nmcli con show
NAME                    UUID                                  TYPE            DEVICE
Wired connection 2      03973d6f-ae98-4d0b-8780-90572571713d  802-3-ethernet  eno50332208
Wired connection 1      497fe8ae-5217-4e2e-bf55-151b2ce61b50  802-3-ethernet  eno33554984
eno16777736             0dbee9e5-1e7e-4c88-822b-869cfc9e2d13  802-3-ethernet  eno16777736
team-slave-eno50332208  898ef4eb-65cd-45b1-be93-3afe600547e2  802-3-ethernet  --
team-slave-eno33554984  d72bbe43-eaa9-4220-ba58-cd322f74653e  802-3-ethernet  --
team0                   e6118c6d-3d63-449d-b6b6-5e61a44e5e44  team            nm-team

This automatically creates the following configuration files for the team:

[root@ ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=nm-team
DEVICETYPE=Team
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=team0
UUID=c794ce57-2879-4426-9632-50cf05f8d5b5
ONBOOT=yes

[root@ ~]# cat /etc/sysconfig/network-scripts/ifcfg-team-slave-eno33554984
NAME=team-slave-eno33554984
UUID=9b5d1511-43ee-4184-b20d-540c2820bb6a
DEVICE=eno33554984
ONBOOT=yes
TEAM_MASTER=c794ce57-2879-4426-9632-50cf05f8d5b5
DEVICETYPE=TeamPort

[root@ ~]# cat /etc/sysconfig/network-scripts/ifcfg-team-slave-eno50332208
NAME=team-slave-eno50332208
UUID=9f441c0f-07fc-430b-8bb1-9e913c05d7b3
DEVICE=eno50332208
ONBOOT=yes
TEAM_MASTER=c794ce57-2879-4426-9632-50cf05f8d5b5
DEVICETYPE=TeamPort

Note: If you edit any of these files manually, you will need to run the ‘nmcli con reload’ command so that NetworkManager reads the configuration changes.
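For example, after changing one of these files by hand:

nmcli con reload

The affected connection may also need to be re-activated (brought down and up again) before the new settings are actually used.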

The team has now been set up with the default configuration and will use DHCP. If required, we can manually specify the IP address configuration with the below commands.

nmcli con mod team0 ipv4.method manual
nmcli con mod team0 ipv4.addresses 192.168.1.50/24
nmcli con mod team0 ipv4.gateway 192.168.1.254
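Optionally, DNS servers can be defined in the same way via the ipv4.dns property; the server addresses below are examples only.

nmcli con mod team0 ipv4.dns "192.168.1.254 8.8.8.8"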

Enabling the team

At this point the documentation generally says you can run ‘nmcli con up team0’ to bring the team up, however this will not work yet, as we first need to bring up the ports, that is, the interfaces within the team.

[root@ ~]# nmcli connection up team-slave-eno50332208
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

[root@ ~]# nmcli connection up team-slave-eno33554984
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

After bringing a port up the team should become active. In my test the team is using DHCP, so it was able to obtain an IP address, as shown below, once the first command above was issued. It is important to note that the team interface only starts when one of its port interfaces is started, and that starting either the team or a single port does not automatically start all of the other port interfaces in the team. If you get the ordering wrong, run ‘systemctl restart network’ and it should bring up the ports and team correctly, giving the same result as below.

[root@ ~]# ip a
...
3: eno33554984:  mtu 1500 qdisc pfifo_fast master nm-team state UP qlen 1000
    link/ether 00:0c:29:ac:13:21 brd ff:ff:ff:ff:ff:ff
4: eno50332208:  mtu 1500 qdisc pfifo_fast master nm-team state UP qlen 1000
    link/ether 00:0c:29:ac:13:21 brd ff:ff:ff:ff:ff:ff
8: nm-team:  mtu 1500 qdisc noqueue state UP
    link/ether 00:0c:29:ac:13:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.50/24 brd 192.168.1.255 scope global dynamic nm-team
       valid_lft 86310sec preferred_lft 86310sec
    inet6 fe80::20c:29ff:feac:1321/64 scope link
       valid_lft forever preferred_lft forever

The team is now up and running! I performed some basic testing by running a constant ping to the team at 192.168.1.50, which was responding in <1ms. After disabling one of the virtual machine's network interfaces that was part of the team, the response time briefly increased to 13ms and then returned to <1ms. This shows that connectivity continued to work as expected with only one of the two interfaces in the team available.
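If you want to reproduce a similar failover test purely from the command line rather than disabling the virtual NIC, one option is to take one of the port connections down and back up with nmcli while the ping is running. This is a rough sketch using the connection names created earlier:

nmcli con down team-slave-eno50332208    # simulate losing one port
teamdctl nm-team state                   # the downed port should no longer show as up
nmcli con up team-slave-eno50332208      # restore the port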

Modifying the team

By default our team is using the round robin runner, as shown below.

[root@ ~]# teamdctl nm-team state
setup:
  runner: roundrobin
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  eno50332208
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up

This shows our two ports, which are the interfaces we added into the team, and confirms that the runner is round robin by default. A runner is essentially the method that the team uses to handle traffic; the available runners are listed below.

  • roundrobin: This is the default that we are using. Packets are sent over all interfaces in the team in a round robin manner, that is, one at a time on each interface in turn.
  • broadcast: All traffic is sent over all ports.
  • activebackup: One interface is in use while the other is set aside as a backup. The link is monitored for changes and the team will fail over to the backup link if needed.
  • loadbalance: Traffic is balanced over all interfaces based on Tx traffic, so an equal load should be shared over the available interfaces.

You can specify the runner that you want the team to use at creation time with the below command. It is similar to the command we used to create the initial team, with the exception of the config part added to the end.

nmcli con add type team con-name team0 config '{ "runner": {"name": "broadcast"}}'

Alternatively you can modify the current team by simply editing the configuration file that was created in /etc/sysconfig/network-scripts/ifcfg-team0 and adding a line such as the one below.

TEAM_CONFIG='{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'

You don’t have to memorize the syntax used for the runner; simply check the /usr/share/doc/teamd-X.XX/example_configs/ directory for example configurations. You can copy and paste the runner definitions from these into your nmcli command.

After this I ran ‘systemctl restart network’ to apply the change. You can then confirm which runner the team is now using (here the broadcast runner from the earlier nmcli example):

[root@ ~]# teamdctl nm-team state
setup:
  runner: broadcast

Further documentation and help

If you have any problems, run the below command to access the manual page containing various nmcli examples and then perform a search for ‘team’ for additional information.

man nmcli-examples

The below command may also be helpful for getting useful debugging information out of a team.

teamnl nm-team options
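In a similar vein, teamdctl can dump the JSON configuration a running team is actually using, which is handy for confirming that a runner or link watch change was picked up:

teamdctl nm-team config dump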

Other ways to configure teaming

The nmcli command is not the only way to configure network teaming; however, it does give the best understanding of how all of the different components work together.

Teaming can also be set up non-persistently with teamd directly, with the text user interface ‘nmtui’, or through the graphical network settings.
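As a rough idea of the non-persistent teamd approach (the interface names eth1 and eth2 below are placeholders, and the member interfaces generally need to be down before teamd can add them), you write a small JSON configuration and start the daemon against it; the team is gone again after a reboot or once teamd is stopped.

cat << 'EOF' > /tmp/team0.conf
{
  "device": "team0",
  "runner": {"name": "activebackup"},
  "ports": {"eth1": {}, "eth2": {}}
}
EOF
teamd -d -f /tmp/team0.conf    # start the team daemon in the background
teamd -k -t team0              # tear the team down again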

Summary

We have successfully aggregated multiple network links into one logical team link, allowing us to increase performance or redundancy of our network connection.


This post is part of our Red Hat Certified Engineer (RHCE) exam study guide series. For more RHCE related posts and information check out our full RHCE study guide.

  1. Thank-you for this tutorial.

    I have been trying to understand what the objective “Use network teaming or bonding to configure aggregated network links between two Red Hat Enterprise Linux systems” means. What is meant by “two Red Hat Enterprise Linux systems” with regard to this? Are they somehow configured to route between each other, e.g. four interfaces (two on each host) working in tandem with one IP address shared between them? I did that and it seems to work. I thought I’d get an error about duplicate IP addresses. Instead, when one machine goes down, after a ~10 second delay the other starts responding to pings in its place. Arping shows the MAC address changes too. I don’t understand this objective.

    • Hi, no problem! My understanding is that there are two RHEL machines that are connected together somehow; I don’t think the exact means is too important for the objective. From what you’ve done and with what is covered in this post, it sounds like you are able to complete what they are asking.

  2. You can pass multiple options to the “nmcli con mod team0” command, no need to call it three times :) For example, one long line:

    # nmcli con mod team0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.254

  3. You are right, but breaking it down step by step is very informative without the need to memorize the long syntax.
