Forcing NIC to operate at Full Duplex and 100Mb using Ethtool

ethtool is used for querying and changing the settings of an Ethernet device. For more information on ethtool, you can refer to Using ethtool to check and change Ethernet Card Settings and Forcing NIC to operate at Full Duplex using Ethtool.

To use ethtool to set the NIC to operate at full duplex and 100Mb/s with autonegotiation off, you can use the following command:

# ethtool -s eth0 speed 100 duplex full autoneg off

To force the NIC to use full duplex, 100Mb/s and autonegotiation off permanently, you can put this in /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="speed 100 duplex full autoneg off"
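
For context, a minimal ifcfg-eth0 carrying this option might look like the sketch below; the addressing lines are just placeholders, so keep whatever your system already has and only add the ETHTOOL_OPTS line.

DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ETHTOOL_OPTS="speed 100 duplex full autoneg off"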

To verify that the settings are correct, do

# ethtool eth0 (or eth1, depending on the NIC you are using)
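
If you only want the relevant fields, you can filter the output; on most drivers you should see Speed: 100Mb/s, Duplex: Full and Auto-negotiation: off once the change has taken effect.

# ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'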

Configure TCP for faster connections and transfers

On a default Linux box, the TCP settings may not be optimised for the larger bandwidth available on 100Mb/s and faster connections; most defaults are tuned with 10Mb/s networks in mind. I'm relying on the Linux Tweaking article from SpeedGuide.net to configure TCP.

The TCP Parameters to be configured are

/proc/sys/net/core/rmem_max – Maximum TCP Receive Window
/proc/sys/net/core/wmem_max – Maximum TCP Send Window
/proc/sys/net/ipv4/tcp_timestamps – timestamps (RFC 1323) add 12 bytes to the TCP header
/proc/sys/net/ipv4/tcp_sack – TCP selective acknowledgements (SACK)
/proc/sys/net/ipv4/tcp_window_scaling – support for large TCP Windows (RFC 1323). Needs to be set to 1 if the Max TCP Window is over 65535.
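
Before changing anything, it is worth recording the current values so you can roll back later; the same /proc entries can simply be read back:

cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/ipv4/tcp_timestamps
cat /proc/sys/net/ipv4/tcp_sack
cat /proc/sys/net/ipv4/tcp_window_scaling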

There are 2 methods to apply the changes.

Method 1: Write the values directly into the files under /proc/sys/net/. However, do note that the settings will be lost on reboot.

echo 256960 > /proc/sys/net/core/rmem_default
echo 256960 > /proc/sys/net/core/rmem_max
echo 256960 > /proc/sys/net/core/wmem_default
echo 256960 > /proc/sys/net/core/wmem_max
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
echo 1 > /proc/sys/net/ipv4/tcp_sack
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling

Method 2: For permanent settings, configure /etc/sysctl.conf with the entries below.

net.core.rmem_default = 256960
net.core.rmem_max = 256960
net.core.wmem_default = 256960
net.core.wmem_max = 256960
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1

Execute sysctl -p to make these new settings take effect.
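
To confirm that the new values are active, you can query them back with sysctl (the key = value output shown is the typical format):

# sysctl net.core.rmem_max net.ipv4.tcp_window_scaling
net.core.rmem_max = 256960
net.ipv4.tcp_window_scaling = 1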

Configuring NFS Server for Performance

How the NFS server exports the file system plays an important part in the overall performance of NFS. Do also read the NFS Client Recommended Configuration section in this same blog.

  1. Tuning NFSD Server Daemon for Performance
  2. Dealing with Overflow of Fragmented Packets
  3. Configure TCP for faster connections and transfers
  4. Turning Off Autonegotiation of NICs and Hubs (optional)
  5. Tuning NFS Server exports file for performance (/etc/exports); see the sample at the end of this section
  6. Network Design consideration for NFS

For more in-depth information, see Optimizing NFS Performance.
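
As a rough illustration of items 1 and 5 above (the thread count, subnet and export options below are assumptions to adapt to your environment, not definitive values), on a Red Hat style system the number of nfsd threads can be raised in /etc/sysconfig/nfs (the default is typically 8):

RPCNFSDCOUNT=16

and a performance-oriented line in /etc/exports might look like this (hypothetical subnet; async trades crash safety for write speed, so use it only where that trade-off is acceptable):

/home   192.168.1.0/24(rw,async,no_subtree_check)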

Configuring NFS Client for Performance

How the NFS client mounts the file system has some impact on the performance of the NAS boxes. There are some NFS mount options that we can use. I'm assuming we are using NFSv3.

  1. Use the tcp option when possible. UDP performance is better when the network load is light, but the TCP option is more efficient when the system load is heavy. When using TCP, a single dropped packet can be retransmitted without retransmitting the entire RPC request, resulting in better performance on lossy networks. In addition, TCP handles network speed differences better than UDP, due to the underlying flow control at the network level.
  2. Use the hard option so that the client continues to retry the NFS operation rather than returning an error to the user application performing the I/O.
  3. rsize and wsize specify the size of the chunks of data that the client and server pass back and forth to each other. If no rsize and wsize options are specified, the default varies by which version of NFS we are using. To maximise the read / write, use rsize=32768, wsize=32768.
  4. By default, every time a client reads from a file, the server must update the inode time stamp of the file with the most recently accessed time. This leads to a performance penalty. Performance should improve by adding the noatime flag.
  5. For a heavily loaded server, you may want to increase the timeout to 2 seconds, timeo=20, to avoid overloading the server.
  6. For more reliability when the server is heavily loaded, use retrans=10 so that the client retries the RPC commands 10 times instead of the default 3.
  7. Caching Parameters
    1. acregmin=n. The minimum time (in seconds) that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 3 seconds. There is no need to tweak this parameter.
    2. acregmax=n. The maximum time (in seconds) that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 60 seconds. It is recommended to tweak this parameter to 10, i.e. acregmax=10.
    3. acdirmin=n. The  minimum  time  (in  seconds) that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. Recommended acdirmin=0
    4. acdirmax=n. The  maximum  time  (in  seconds) that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. Recommended acdirmax=0
  8. Last but not least, there is no single configuration that fits all possible application or file system usages. It takes a lot of tweaking and testing to find the final sweet spot.

Putting it all together, we have….

nas:/home    /home     nfs   hard,intr,tcp,rsize=32768,wsize=32768,noatime,timeo=20,acdirmin=0,acdirmax=0,acregmax=10    0  0

Note:

  • intr means that the NFS operation can be interrupted.
  • The first 0 means that the dump program does not need to back up the file system.
  • The second 0 means that the fsck program does not need to check the file system at boot time.
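
If you prefer to test these options before committing them to /etc/fstab, the equivalent one-off mount (using the same nas:/home export as above) would be:

# mount -t nfs -o hard,intr,tcp,rsize=32768,wsize=32768,noatime,timeo=20,acdirmin=0,acdirmax=0,acregmax=10 nas:/home /home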

Much of the information I have written here can be found in:

  1. Optimising NFS Performance (nfs.sourceforge.net)
  2. NFS for Clusters (billharlan.com)
  3. Why are changes made on an NFS share on my Red Hat Enterprise Linux 5 client not immediately visible to other NFS clients? (redhat.com)
  4. Problems with Linux NFS (smorgasbork.com)