Performance Difference Between NFS Versions

The Network File System (NFS) protocol has been around since the ’80s, which is ancient in the realm of technology, and yet it is still widely used. How has the performance of NFS changed over the years between the different versions?

Here we are going to compare the performance of NFS versions 2, 3, 4, and 4.1 to get an idea of how things have progressed over time, and the results are quite surprising.

While there are lots of other notable changes between these versions, our testing is only concerned with sequential read/write and random read/write performance.

To help give you an idea of how far NFS has come, below are the RFCs for each version along with the month and year they were published.

  • NFS version 2 – RFC 1094, March 1989
  • NFS version 3 – RFC 1813, June 1995
  • NFS version 4 – RFC 3530, April 2003
  • NFS version 4.1 – RFC 5661, January 2010

With over 20 years between version 2 and version 4.1 of NFS we would definitely expect to see performance improvements, so let’s take a look!

The test environment

Our test environment contains four servers, two running CentOS 6.6 and two running CentOS 7.1.1503. The CentOS 6.6 servers were used to benchmark NFS version 2, one acting as the NFS server and the other as the NFS client. The CentOS 7 servers were likewise set up as an NFS server and NFS client and were used for the NFS version 3, 4 and 4.1 tests.

The NFS version 2 results should therefore be taken with a grain of salt as they were run on a different operating system which still had support for version 2. This was done to help illustrate how far things have come since version 2, as CentOS 7 and above no longer support NFS version 2.
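If you want to confirm which versions a particular NFS server supports, a quick way is to check /proc/fs/nfsd/versions on the server while the NFS service is running; it lists each protocol version prefixed with a ‘+’ if it is enabled or a ‘-’ if it is not. The hostname in the prompt below is just a placeholder.

[root@server ~]# cat /proc/fs/nfsd/versions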

The NFS clients mount the export from the NFS server with the version explicitly specified for each test to ensure they are mounting as the correct version. Once mounted, ‘nfsstat -m’ was run on the client to verify that the mount point was using the version specified for the test.
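For example, a version 3 mount from the NFS server would look something like the lines below; the server name and export path are placeholders rather than the exact ones used here, and the vers option is changed to 4 or 4.1 for the later tests (older nfs-utils versions may need vers=4,minorversion=1 rather than vers=4.1).

[root@client ~]# mount -t nfs -o vers=3 nfsserver:/export/nfsbench /mnt
[root@client ~]# nfsstat -m

The ‘nfsstat -m’ output lists each NFS mount along with its mount options, so the vers= value can be checked against the version intended for the test.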

The NFS servers had dedicated 40GB disks attached which were used for the NFS export; these disks were not used for anything except the NFS benchmark. Using a secondary disk rather than the operating system disk rules out interference from any background activity the OS may perform during the benchmark.
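For reference, preparing and exporting a dedicated disk for this kind of test would look roughly like the following on the server side; the device name, file system, export path and client name are illustrative placeholders, not necessarily what was used in these tests.

[root@server ~]# mkfs.xfs /dev/sdb
[root@server ~]# mkdir -p /export/nfsbench
[root@server ~]# mount /dev/sdb /export/nfsbench
[root@server ~]# echo '/export/nfsbench client(rw,sync,no_root_squash)' >> /etc/exports
[root@server ~]# exportfs -ra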

The benchmarking process

The benchmarks were run using IOzone, a file system benchmarking tool that I found had a good reputation for NFS benchmarking.

In this instance we are going to benchmark sequential read, sequential write, random read, and random write with IOzone; these are defined below, as taken from the IOzone documentation.

  • Read: This test measures the performance of reading an existing file.
  • Write: This test measures the performance of writing a new file. When a new file is written not only does the data need to be stored but also the overhead information for keeping track of where the data is located on the storage media. This overhead is called the “metadata”. It consists of the directory information, the space allocation and any other data associated with a file that is not part of the data contained in the file. It is normal for the initial write performance to be lower than the performance of re-writing a file due to this overhead information.
  • Random Read: This test measures the performance of reading a file with accesses being made to random locations within the file. The performance of a system under this type of activity can be impacted by several factors such as: Size of operating system’s cache, number of disks, seek latencies, and others.
  • Random Write: This test measures the performance of writing a file with accesses being made to random locations within the file. Again the performance of a system under this type of activity can be impacted by several factors such as: Size of operating system’s cache, number of disks, seek latencies, and others.

As the documentation was not clear if the read/write tests were sequential, I sent a message to one of the IOzone developers who was able to confirm that those tests are indeed sequential.

Below is the command that was run over the NFS mount, which was mounted at /mnt:

[root@client current]# ./iozone -Racz -g 1G -i 0 -i 1 -i 2 -U /mnt/ -f /mnt/testfile -b /root/output.xls
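For anyone not familiar with IOzone, the options used above do roughly the following (see the IOzone documentation for the full details):

  • -R – generate an Excel compatible report
  • -a – automatic mode, testing a range of file and record sizes
  • -c – include close() in the timing calculations, which matters for NFS
  • -z – used with -a to also test the small record sizes against large files
  • -g 1G – set the maximum file size for automatic mode to 1 GB
  • -i 0 -i 1 -i 2 – run the write/re-write, read/re-read and random read/random write tests
  • -U /mnt/ – unmount and remount the mount point between tests so the cache does not contain the file under test
  • -f /mnt/testfile – the temporary file to test with
  • -b /root/output.xls – write the results to an Excel compatible spreadsheet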

Results

Below are four images outlining the results as a series of graphs for sequential read/write and random read/write speeds; click them to open at full size.

NFS Sequential Read

NFS Sequential Write

NFS Random Read

NFS Random Write

Additionally, if you would like to see the raw data, you can download it here:

Performance differences and comparison

Let’s start with the obvious: in every test NFS version 2 comes last by a significant margin. This is to be expected given the age of the protocol, and I do not believe the gap is caused by the different operating systems in use (CentOS 6.6 versus CentOS 7.1.1503), even though that is the only other difference between the NFS version 2 test and the rest.

In the sequential read tests, NFS versions 3 and 4 appear to be more stable, with a larger amount of the graph in the dark blue 120-140 MBps area. This is interesting as NFS version 4.1, on the other hand, has a few more dips; however, it does seem to produce slightly better results with small file sizes.

The sequential writes appear almost identical between NFS versions 3, 4 and 4.1. Versions 4 and 4.1 do, however, appear to perform slightly better with the smaller file sizes, with 4 even outperforming 4.1 around the 512K – 2MB file size range.

Random reads are very similar between versions 3, 4 and 4.1; there isn’t much more to say as the results are so close. Random writes are also fairly similar throughout, and consistent with the sequential write test, with versions 4 and 4.1 performing slightly better at smaller file sizes.

Conclusion

While NFS version 2 obviously performs the poorest, as expected, the performance of NFS versions 3, 4 and 4.1 is in general fairly similar, at least in these tests. I expected 4.1 with pNFS to clearly dominate the results; however, this was not the case, at least not under these particular synthetic workloads.

These tests seem to indicate that after version 3 there isn’t much to gain in practical performance for sequential read/write and random read/write workloads. Of course, there are plenty of other differences between NFS versions, so using the newest available version is generally recommended; however, from a pure performance standpoint there do not seem to be significant differences after version 3 under these particular test scenarios.

  1. Good information, thanks for those test results. One question for my own curiosity, did you run the write tests with direct=1 or did you use client side caching? I’ve done a few sequential write tests with NFS v3 and 4 and saw little difference with direct=1. NFSv3 though seemed to outperform v4 when I used direct=0.

  2. Isn’t v4.1’s strength supposed to be in parallel I/O performance? Did your testing include reading/writing many files concurrently and comparing the overall throughput?
