In Linux it is possible to aggregate multiple network links together into a single logical link, which can increase network throughput or provide redundancy. For example, we can assign an IP address to a group of two network interfaces to increase available throughput, or reserve one interface as a backup so that if the first one fails we can fail over to it.
Here we’re going to cover how to create and configure a network team with two different network interfaces.
Link aggregation has traditionally been done with network bonding; however, as of RHEL 7, teaming is the preferred method, though you can use either option. Teaming is more efficient and can do most of what bonding can do while adding extra features; see the Red Hat documentation for a full list of the differences.
Install the Teaming Daemon
In order to use network teaming you must first install the teamd package, as it is not normally installed by default.
yum install teamd -y
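As a quick optional check, you can confirm the package is present by querying the RPM database:

# Verify that the teamd package is installed
rpm -q teamd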
Configuring teaming with nmcli
The nmcli tool is a command line tool for working with NetworkManager.
First we can display the connections that are currently in place.
[root@ ~]# nmcli con show
NAME                UUID                                  TYPE            DEVICE
Wired connection 2  03973d6f-ae98-4d0b-8780-90572571713d  802-3-ethernet  eno50332208
Wired connection 1  497fe8ae-5217-4e2e-bf55-151b2ce61b50  802-3-ethernet  eno33554984
eno16777736         0dbee9e5-1e7e-4c88-822b-869cfc9e2d13  802-3-ethernet  eno16777736
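In addition to the connections, you can also list the underlying devices that NetworkManager knows about; this is optional but can help map connection names to device names:

# List physical devices and their current state
nmcli device status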
In the virtual machine that I am testing with, I have three network interfaces. eno16777736 is the primary interface that I am using to manage the VM over SSH, so it will not be used in the team. Wired connection 1 and Wired connection 2, which correspond to devices eno33554984 and eno50332208 respectively, will be used in the team.
Next we create a team called ‘team0’.
[root@ ~]# nmcli con add type team con-name team0
Connection 'team0' (e6118c6d-3d63-449d-b6b6-5e61a44e5e44) successfully added.
Now if we show the connections again we should see team0 listed as device ‘nm-team’.
[root@ ~]# nmcli con show
NAME                UUID                                  TYPE            DEVICE
Wired connection 2  03973d6f-ae98-4d0b-8780-90572571713d  802-3-ethernet  eno50332208
Wired connection 1  497fe8ae-5217-4e2e-bf55-151b2ce61b50  802-3-ethernet  eno33554984
eno16777736         0dbee9e5-1e7e-4c88-822b-869cfc9e2d13  802-3-ethernet  eno16777736
team0               e6118c6d-3d63-449d-b6b6-5e61a44e5e44  team            nm-team
This team is not yet doing anything, so we next add in our two interfaces.
[root@ ~]# nmcli con add type team-slave ifname eno33554984 master team0
Connection 'team-slave-eno33554984' (d72bbe43-eaa9-4220-ba58-cd322f74653e) successfully added.
[root@ ~]# nmcli con add type team-slave ifname eno50332208 master team0
Connection 'team-slave-eno50332208' (898ef4eb-65cd-45b1-be93-3afe600547e2) successfully added.
Now if we show the connections again we should see the two team-slaves listed.
[root@ ~]# nmcli con show
NAME                    UUID                                  TYPE            DEVICE
Wired connection 2      03973d6f-ae98-4d0b-8780-90572571713d  802-3-ethernet  eno50332208
Wired connection 1      497fe8ae-5217-4e2e-bf55-151b2ce61b50  802-3-ethernet  eno33554984
eno16777736             0dbee9e5-1e7e-4c88-822b-869cfc9e2d13  802-3-ethernet  eno16777736
team-slave-eno50332208  898ef4eb-65cd-45b1-be93-3afe600547e2  802-3-ethernet  --
team-slave-eno33554984  d72bbe43-eaa9-4220-ba58-cd322f74653e  802-3-ethernet  --
team0                   e6118c6d-3d63-449d-b6b6-5e61a44e5e44  team            nm-team
This automatically creates the following configuration files for the team:
[root@ ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=nm-team
DEVICETYPE=Team
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=team0
UUID=c794ce57-2879-4426-9632-50cf05f8d5b5
ONBOOT=yes

[root@ ~]# cat /etc/sysconfig/network-scripts/ifcfg-team-slave-eno33554984
NAME=team-slave-eno33554984
UUID=9b5d1511-43ee-4184-b20d-540c2820bb6a
DEVICE=eno33554984
ONBOOT=yes
TEAM_MASTER=c794ce57-2879-4426-9632-50cf05f8d5b5
DEVICETYPE=TeamPort

[root@ ~]# cat /etc/sysconfig/network-scripts/ifcfg-team-slave-eno50332208
NAME=team-slave-eno50332208
UUID=9f441c0f-07fc-430b-8bb1-9e913c05d7b3
DEVICE=eno50332208
ONBOOT=yes
TEAM_MASTER=c794ce57-2879-4426-9632-50cf05f8d5b5
DEVICETYPE=TeamPort
Note: If you edit any of these files manually, you will need to run the 'nmcli con reload' command so that NetworkManager reads the configuration changes.
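For example, after a manual edit:

# Make NetworkManager re-read the ifcfg files from disk
nmcli con reload
# Optionally confirm the change is now visible to NetworkManager
nmcli con show team0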
The team has now been set up with the default configuration. It will use DHCP by default; however, we can manually specify IP address configuration if required with the below commands.
nmcli con mod team0 ipv4.method manual
nmcli con mod team0 ipv4.addresses 192.168.1.50/24
nmcli con mod team0 ipv4.gateway 192.168.1.254
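Depending on the nmcli version, setting ipv4.method manual may be rejected unless an address is supplied at the same time, so the properties can also be passed in a single command (the address, gateway, and DNS server below are example values):

# Set method, address, gateway and (optionally) DNS in one command
nmcli con mod team0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.254 ipv4.dns 192.168.1.254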
Enabling the team
At this point the documentation generally says you can run 'nmcli con up team0' to bring the team up; however, this will not work, as we first need to bring up the ports, that is, the interfaces within the team.
[root@ ~]# nmcli connection up team-slave-eno50332208
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
[root@ ~]# nmcli connection up team-slave-eno33554984
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
After bringing a port up, the team should become active. In my test the team is using DHCP, so it was able to get an IP address successfully, as shown below, after the first command above was issued. It is important to note that the team interface only starts when one of its port interfaces is started, and that this does not automatically start the other port interfaces in the team; likewise, starting the team interface alone does not start any of the port interfaces. If you get the ordering wrong, you can run 'systemctl restart network', which should bring up the ports and the team correctly and give the same result as below.
[root@ ~]# ip a
...
3: eno33554984: mtu 1500 qdisc pfifo_fast master nm-team state UP qlen 1000
    link/ether 00:0c:29:ac:13:21 brd ff:ff:ff:ff:ff:ff
4: eno50332208: mtu 1500 qdisc pfifo_fast master nm-team state UP qlen 1000
    link/ether 00:0c:29:ac:13:21 brd ff:ff:ff:ff:ff:ff
8: nm-team: mtu 1500 qdisc noqueue state UP
    link/ether 00:0c:29:ac:13:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.50/24 brd 192.168.1.255 scope global dynamic nm-team
       valid_lft 86310sec preferred_lft 86310sec
    inet6 fe80::20c:29ff:feac:1321/64 scope link
       valid_lft forever preferred_lft forever
The team is now up and running! I performed some basic testing by running a constant ping to the team at 192.168.1.50, which was responding in under 1 ms. After disabling one of the network interfaces on the virtual machine that was part of the team, the response time briefly increased to 13 ms and then returned to under 1 ms. This shows that connectivity continued to work as expected with only one of the two interfaces in the team available.
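If you want to reproduce a similar test, one simple approach is to ping the team address from another host while taking one of the ports down and back up with nmcli. The test above disabled the interface on the virtual machine itself, but taking the port connection down has a comparable effect (interface names and the address are the examples used throughout this post):

# From another host, start a continuous ping to the team address
ping 192.168.1.50

# On the teamed server, simulate a failure on one port, check the team, then recover it
nmcli con down team-slave-eno50332208
teamdctl nm-team state    # confirm the remaining port is still up
nmcli con up team-slave-eno50332208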
Modifying the team
By default our team is using the round robin runner, as shown below.
[root@ ~]# teamdctl nm-team state
setup:
  runner: roundrobin
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  eno50332208
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
This shows our two ports, which are the interfaces we added into the team; the runner is round robin by default. A runner is essentially the method that the team uses to handle traffic, and the different runners available are listed below.
- roundrobin: This is the default that we are using; it simply transmits packets over each interface in the team in turn, one after the other.
- broadcast: All traffic is sent over all ports.
- activebackup: One interface is in use while the other is set aside as a backup; the link is monitored for changes and traffic fails over to the backup link if needed.
- loadbalance: Traffic is balanced over all interfaces based on Tx traffic; an equal load should be shared over the available interfaces.
You can specify the runner you want the team to use when you create it with the below command; it's similar to the command we used to create the initial team, with the exception of the config part added to the end.
nmcli con add type team con-name team0 config '{ "runner": {"name": "broadcast"}}'
Alternatively, you can modify the current team by editing the configuration file that was created in /etc/sysconfig/network-scripts/ifcfg-team0 and adding a line such as the one below.
TEAM_CONFIG='{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
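If you would rather stay within nmcli than edit the file by hand, the same JSON can usually be applied through the connection's team.config property; the below is a sketch of that approach, after which the connection needs to be brought up again to apply the change:

# Apply a new runner configuration to the existing team via nmcli
nmcli con mod team0 team.config '{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
nmcli con up team0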
You don't have to memorize the syntax used for the runner; simply check the /usr/share/doc/teamd-X.XX/example_configs/ directory for example configurations. You can copy and paste the runner definitions used there into your nmcli command.
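For example, to browse the shipped samples (the exact version number and file names will vary between teamd releases):

# List the example runner configurations and view one of them
ls /usr/share/doc/teamd-*/example_configs/
cat /usr/share/doc/teamd-*/example_configs/activebackup_ethtool_1.conf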
After this I ran 'systemctl restart network' to apply the change; you can then confirm that this is the runner the team is using:
[root@ ~]# teamdctl nm-team state
setup:
  runner: broadcast
Further documentation and help
If you have any problems, run the below command to access the manual page containing various nmcli examples and then perform a search for ‘team’ for additional information.
man 7 nmcli-examples
The below command may also be helpful for getting some useful debugging information out of a team.
teamnl nm-team options
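teamdctl can also dump the full runtime state and the configuration currently in use, which is handy when a port is not behaving as expected (nm-team is the device name used in this example):

# Dump detailed runtime state and the JSON configuration of the running team
teamdctl nm-team state dump
teamdctl nm-team config dump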
Other ways to configure teaming
The nmcli command is not the only way to configure network teaming; however, it does give the best understanding of how all of the different components work.
Teaming can also be set up non-persistently with teamd directly, via the 'nmtui' text user interface, or through the graphical network settings.
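As a rough sketch of the non-persistent teamd approach (the config file path, device name, and ports below are illustrative, the interfaces should not be managed by existing NetworkManager connections at the time, and the team will not survive a reboot):

# Minimal teamd config: an activebackup team over the two example interfaces
cat << 'EOF' > /tmp/team0.conf
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "ports": { "eno33554984": {}, "eno50332208": {} }
}
EOF

# Start teamd as a daemon using this config, then address the new device manually
teamd -d -f /tmp/team0.conf
ip addr add 192.168.1.50/24 dev team0
ip link set team0 up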
Summary
We have successfully aggregated multiple network links into one logical team link, allowing us to increase performance or redundancy of our network connection.
This post is part of our Red Hat Certified Engineer (RHCE) exam study guide series. For more RHCE related posts and information check out our full RHCE study guide.
Comments
Thank you for this tutorial.
I have been trying to understand what the objective "Use network teaming or bonding to configure aggregated network links between two Red Hat Enterprise Linux systems" means. What is meant by "two Red Hat Enterprise systems" with regard to this? They are somehow configured to route between each other? e.g. four interfaces (two on each host) working in tandem with one IP address shared between them? I did that and it seems to work. I thought I'd get an error about duplicate IP addresses. Instead, when one machine goes down, after a ~10 second delay the other starts responding to pings in its place. Arping shows the MAC address changes too. I don't understand this objective.
Hi, no problem! My understanding is that there are two RHEL machines that are connected together somehow; I don't think the exact means is too important for the objective. From what you've done and with what is covered in this post, I think it sounds like you are able to complete what they are asking.
You can pass multiple options to the "nmcli con mod team0" command; no need to call it three times :) For example, one long line:
# nmcli con mod team0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.254
In my case it had to be passed in one go.
[root@rhce2 network-scripts]# nmcli con mod team0 ipv4.method manual
Error: Failed to modify connection 'team0': ipv4.addresses: this property cannot be empty for 'method=manual'
[root@rhce2 network-scripts]# nmcli con mod team0 ipv4.method manual ipv4.address 192.148.10.222/24
[root@rhce2 network-scripts]#
You are right, but breaking it down step by step is very informative without the need to memorize the long syntax.
Hello. I noticed a serious problem that needs your attention: how do you configure automatic recovery of a failed port in RHEL 7.0 teaming? I successfully configured teaming by following the above procedure on a RHEL 7.0 server. The failover from port1 to port2 (from main to backup) was also successful by just applying the commands mentioned in your procedure, but when I tried to fail over again from port2 to port1 (from backup to main), it was not possible. I found port1 still in a failed state and I had to recover it manually. This is a big problem because if the backup also fails, the network will remain in a failed state and never recover. So is there a solution to this problem: a command to (try to) immediately recover (stimulate) a failed port after a failover, so that it becomes ready to switch back to main (active) again if/when necessary? This means the config must always try to keep both ports UP so that failover never fails. Thanks.
I wasted a lot of time trying to force this teaming to work with LACP and found a solution at last. I hope it will be useful for somebody else.
You just need to add the following line to /etc/sysconfig/network-scripts/ifcfg-team0:
TEAM_CONFIG="{ \"runner\": {\"name\": \"lacp\", \"active\": true, \"fast_rate\": true }}"
Without this setting one port was active but the second was always on standby.
With this addition it's just about the best teaming manual and configuration.
Thank you.
Why can't I keep both the "device" and the "slave" (which uses the device, right?) active at the same time? Whenever I activate the device, the slave goes down and vice versa (as displayed in "nmcli con show"). When I restart the network I seem to get a random mix of devices and slaves that have been activated. Have I missed something in the device or slave setup definition that affects activation order and coexistence?
I’m back… ;)
Okay, so if I don’t get hung up on forcing the slave to be in the UP state everything seems to work okay: I can ping public web sites, connect to my gateway, etc. However, ‘nmcli con show’ shows the devices and the team as green with assigned device names – but not the slaves. Likewise, ‘ip a’ shows the devices but not the slaves, and notes that the team state is DOWN. Even so, I can ping my team’s IP, and can even modify that IP with ‘nmcli con mod ipv4.addresses’ and restart the network and the old IP is no longer there but the new one is – so it is definitely serving up that IP (even though the slaves and team appear down in all tools).
So is that whole “activate the slaves to get the team UP” only a “first-time” thing? Can I be confident that this is (and will continue to be) working correctly?
Crusty Veterans are encouraged to reply. Thanks!
Hi there, thanks for the easy to follow doc. I've successfully created a "team" device which combines 2 ethernet devs using LACP. Works well so far. Next I need to make the "team" device accessible to another computer's network (same subnet). So I figured I should just add a static route via a "route-team" file. With that done and NetworkManager restarted, the "team" device does not start anymore, without giving a reason.
entry in the “route-team”:
10.101.4.212/32 via 10.101.4.132 dev team
any comments welcome
regards – Rainer
Hello and thanks for your great tutorials,
I failed to pass the RHCE last week, and as I'm reviewing the commands and configs I used, I'd like to ask you a question:
I have two IPs in different subnets, no gateway and no network mask given. I want these to communicate. Does that mean I just need them to ping each other after setting them up, or do I need to set a static route or something else?
Thank you
If you had two network interfaces in different subnets you would need some sort of route for them to communicate, whether that traffic is ping or some other protocol (assuming there is a requirement for them to communicate with each other). Assuming no router is available, I'd look at configuring static routes that become available at system boot.
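For example (the addresses below are placeholders), one way to make a static route persistent is to attach it to the team connection with nmcli so that it comes back at boot:

# Add a persistent static route to the team connection, then re-apply the connection
nmcli con mod team0 +ipv4.routes "10.10.10.0/24 192.168.1.254"
nmcli con up team0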
Thanks a lot for your reply. Each VM has two NICs for the team, but the addresses given for the team connection are in a different subnet than those of the VM, so you're saying I should add a static route in the VMs for them to see the team address subnet?
Other than that, maybe both teams need a bridge to communicate?
Thanks a lot
Oh, are the two addresses in the same subnet or different subnets? If the two IP addresses for both team devices are in the same subnet, and the network interfaces of both teams are on the same network / vlan then they should be able to talk directly without any routes. I’d suggest creating some test VMs and giving it a go :)
The addresses that are given to be used for the team0 master are in the same subnet, but this subnet is different from the VMs' subnet.
That should be fine then; both team interfaces should be able to talk to each other without the need for any routes, assuming the network interfaces of the VMs are on the same network.
Thank you! I’ll make some extra tests before I reexam just in case.
Best regards!
Hello,
you forgot to add ipv4.dns XXX.XXX.XXX.XXX
Thank you
You guys made it very simple to understand
👌