How to unmount an NFS mount that fails to unmount with ‘device is busy’

If you are attempting to unmount an NFS mount and commands like the following fail with ‘device is busy’:

# mount -t nfs -o remount /mnt/nfs 
# umount /mnt/nfs 
# umount -f /mnt/nfs 
# umount -l /mnt/nfs 
# umount -lf /mnt/nfs

Identify which processes tied to the mount need to be killed by using lsof and fuser:

# lsof | grep /mnt/nfs

The lsof command above identifies the PIDs of the processes associated with the /mnt/nfs share. Kill any processes locking the stale mount.
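Since fuser is mentioned above but not shown, here is a minimal sketch of how it can be used for the same purpose. fuser -vm lists the processes using the mount, and fuser -km sends SIGKILL to all of them, so use the latter with care:

# fuser -vm /mnt/nfs
# fuser -km /mnt/nfs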

Try to force the umount again after the processes have been killed:

# umount -lf /mnt/nfs

References:

  1. How to unmount a stale NFS mount that fails to unmount with ‘device is busy’ after network disconnectivity?

How to increase the number of threads created by the NFS daemon for CentOS 7

Taken from How to increase the number of threads created by the NFS daemon in RHEL 4, 5, 6 and 7?

In the case of an NFS server under high load, it may be advisable to increase the number of threads created during the nfsd server start-up.

Edit the following line in /etc/nfs.conf

% vim /etc/nfs.conf
[nfsd]
# debug=0
threads=64
# host=
# port=0
# grace-time=90
# lease-time=90
# udp=y
# tcp=y
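After saving the change, the NFS server has to be restarted for the new thread count to take effect. A minimal sketch, assuming the systemd unit is named nfs-server:

% systemctl restart nfs-server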

Testing whether it works (only the relevant th line of the output is shown):

% cat /proc/net/rpc/nfsd
th 64 0 2.610 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000

According to Red Hat, “The first number is the total number of NFS server threads started. The second number indicates whether at any time all of the threads were running at once. The remaining numbers are a thread count time histogram.”
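Another quick sanity check is to count the running nfsd kernel threads directly. A sketch using pgrep; treat the count as approximate, since pgrep matches any process whose name contains nfsd:

% pgrep -c nfsd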

mount.nfs: requested NFS version or transport protocol is not supported

If you have encountered issues like

mount.nfs: requested NFS version or transport protocol is not supported

OR

mount.nfs4: Protocol not supported

To resolve this, mount with NFS version 3 (the four -v flags increase verbosity for debugging):

% mount -vvvv -t nfs -o vers=3 nfs-server:/share /mnt/nfs
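As an additional check, rpcinfo can show which NFS versions and transports the server has actually registered with rpcbind. A sketch, where nfs-server is the placeholder hostname from the mount command above:

% rpcinfo -p nfs-server | grep nfs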

References:

  1. Error “mount.nfs: requested NFS version or transport protocol is not supported” when attempting to mount an NFS share on Red Hat Enterprise Linux 6

showmount fails with clnt_create: RPC: Program not registered from an NFS client communicating with a NetApp filer

1. Assuming this is your mount command

mount -t nfs -o vers=3 XXX.XXX.XXX.XXX:/myserver/nfs /myclient/nfs


2. And if, when running the showmount command from the NFS client, the following error is observed:

clnt_create: RPC: Program not registered


3. You have to access the NetApp storage and check that the NFS protocol is enabled. I am using the NetApp OnCommand System Manager.

4. Check that the NFS client can now list the exports and mount:

showmount --no-headers -e nfs_server
/ (everyone)

References:

  1. showmount fails with clnt_create: RPC: Program not registered when executed from a RHEL6 NFS client communicating with a NetApp filer

Using nfsstat to troubleshoot NFS performance issues

This write-up is taken from the Red Hat article Using nfsstat and nfsiostat to troubleshoot NFS performance issues on Linux.

Since NFS relies on the existing network infrastructure, any glitches on the network may affect the performance of the connection. One of the tools that can be used to investigate is nfsstat.

% yum install nfs-utils

The nfsstat command

The nfsstat command displays statistical information about the NFS and Remote Procedure Call (RPC) interfaces to the kernel.

On Server Side,

% nfsstat -s
Server rpc stats:
calls badcalls badclnt badauth xdrcall
107310012 0 0 0 0

The most important field to check is badcalls, which represents the total number of calls rejected by the RPC layer. When badcalls is greater than 0, the underlying network needs to be checked, as there might be latency.
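To see whether badcalls is still increasing while traffic is flowing, the counters can be watched live. A sketch using watch, where -d highlights the fields that change between refreshes:

% watch -d nfsstat -s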


On NFS Client Side,

% nfsstat -c
Client rpc stats:
calls retrans authrefrsh
23158 0 23172

Client nfs v3:
null getattr setattr lookup access readlink
0 0% 7237 31% 7 0% 1443 6% 7874 34% 11 0%
read write create mkdir symlink mknod
578 2% 4548 19% 585 2% 1 0% 0 0% 0 0%
remove rmdir rename link readdir readdirplus
0 0% 0 0% 0 0% 0 0% 0 0% 51 0%
fsstat fsinfo pathconf commit
25 0% 10 0% 5 0% 781 3%

The client is doing well, as it has relatively few retransmission requests. If you are encountering excessive retransmissions, you may want to adjust the data transfer buffer sizes, which are specified by the mount command options rsize and wsize.
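The rsize and wsize currently in effect for each mount can be displayed with nfsstat -m. Below is a minimal sketch of mounting with explicit buffer sizes; the values and the nfs-server:/share path are illustrative only:

% nfsstat -m
% mount -t nfs -o rsize=32768,wsize=32768 nfs-server:/share /mnt/nfs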


Check for dropped packets

Check for dropped packets by running the following command on both the server and the client:

% nfsstat -o net
Client packet stats:
packets udp tcp tcpconn
0 0 0 0
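If the counters above show drops, the statistics of the underlying network interface are worth a look as well. A sketch, where eth0 is a placeholder interface name:

% ethtool -S eth0 | grep -i drop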

NFS mount errors with “clnt_create: RPC: Unknown host” for CentOS 6

When attempting to mount an NFS share on CentOS 6, my mount fails with

clnt_create: RPC: Unknown host

Diagnostic:

If we do a more thorough diagnostic, this is the issue

# showmount -e  
clnt_create: RPC: Unknown host  
# showmount -e localhost  
Export list for localhost:  
/export/my_data *

Resolution:

Taken from the Red Hat site:

Implement forward and reverse lookups (A and PTR records) in DNS and have the system point towards the DNS servers. Implement this for both IPv4 and IPv6. If you are unable to resolve the DNS issues, change the /etc/hosts file as shown below.

Change from

::1          localhost localhost.localdomain localhost6 localhost6.localdomain6

To

::1          machine_hostname localhost localhost.localdomain localhost6 localhost6.localdomain6
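Before restarting NFS, it is worth confirming that the hostname now resolves locally. A quick check, where machine_hostname is the placeholder used above:

# getent hosts machine_hostname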

Restart the NFS service, check showmount -e localhost and showmount -e again, and attempt to mount the share.

# service nfs restart  
# showmount -e localhost  
# showmount -e

Installing NFS4 on CentOS 5 and 6

For more information on NFS4 and difference between NFS3 and NFS4, do look at A brief look at the difference between NFSv3 and NFSv4.

This tutorial is a guide on how to install NFSv4 on CentOS 5 and 6

Step 1: Installing the packages

# yum install nfs-utils nfs4-acl-tools portmap

Some facts about the tools above, as given by yum info:

nfs-utils –  The nfs-utils package provides a daemon for the kernel NFS server and related tools, which provides a much higher level of performance than the traditional Linux NFS server used by most users.

This package also contains the showmount program.  Showmount queries the mount daemon on a remote host for information about the NFS (Network File System) server on the remote host. For example, showmount can display the clients which are mounted on that host. This package also contains the mount.nfs and umount.nfs program.

nfs4-acl-tools – This package contains command-line and GUI ACL utilities for the Linux NFSv4 client.

portmap – The portmapper program is a security tool which prevents theft of NIS (YP), NFS and other sensitive information via the portmapper. A portmapper manages RPC connections, which are used by protocols like NFS and NIS.

The portmap package should be installed on any machine which acts as a server for protocols using RPC.

Step 2: Export the File System from the NFS Server (similar to NFSv3 except for the inclusion of fsid=0)

/home           192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check,fsid=0)
/install        192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check,fsid=1)

The fsid=0 and fsid=1 options provide a number used to identify the filesystem. This number must be different for all the filesystems in /etc/exports that use the fsid option, and only one directory can be exported with each fsid value. The option is otherwise only necessary when exporting filesystems that reside on a block device with a minor number above 255. For NFSv4, fsid=0 additionally marks the root of the exported pseudo filesystem, which is why the client in Step 3 mounts 192.168.1.1:/ rather than a full export path.

Export the file system:

# exportfs -av
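On success, exportfs -av prints one line per exported directory, roughly as follows (output sketch for the example exports above):

exporting 192.168.1.0/24:/home
exporting 192.168.1.0/24:/install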

Start the NFS service:

# service nfs start

If you are supporting NFSv3, you also have to start portmap, since NFSv3 requires it; NFSv4 does not need to interact with the rpcbind, rpc.lockd, and rpc.statd daemons. For a more in-depth understanding, see Fedora Chapter 9. Network File System (NFS) – How it works.

# service portmap restart

Step 3: Client Mount

# mount -t nfs4 192.168.1.1:/ /home
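To make the NFSv4 mount persistent across reboots, an equivalent /etc/fstab entry can be added. A sketch reusing the same server and mount point:

192.168.1.1:/   /home   nfs4   defaults   0 0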

For further information:

  1. NFS4 Client unable to mount Server NFS4 file
  2. A brief look at the difference between NFSv3 and NFSv4

A brief look at the difference between NFSv3 and NFSv4

There are a few interesting differences between NFSv3 and NFSv4. A comparison of NFSv3 and NFSv4 is quite hard to find; the information here is referenced from the NFS Version 4 Open Source Project.

From a file system perspective, the main differences are:

Export Management

  1. In NFSv3, the client must rely on an auxiliary protocol, the MOUNT protocol, to request the list of the server’s exports and obtain the root filehandle of a given export. Once the root filehandle is obtained, it is fed into the NFS protocol proper.
  2. NFSv4 uses a virtual file system to present the server’s exports and their associated root filehandles to the client.
  3. NFSv4 defines a special operation to retrieve the root filehandle, and the NFS server presents the appearance to the client that each export is just a directory in the pseudofs.
  4. The NFSv4 pseudo file system is meant to provide maximum flexibility: export pathnames on servers can be changed transparently to clients.

State

  1. NFSv3 is stateless. In other words, if the server reboots, the clients can pick up where they left off; no state has been lost.
  2. NFSv3 is typically used with NLM, an auxiliary protocol for file locking. NLM is stateful in that the server’s lockd keeps track of locks.
  3. In NFSv4, locking operations are part of the protocol.
  4. NFSv4 servers keep track of open files and delegations.

Blocking Locks

  1. NFSv3 relies on NLM: the client process is put to “sleep”, and when a callback is received from the server, the client process is granted the lock.
  2. In NFSv4, the client is also put to sleep, but it polls the server periodically for the lock.
  3. The benefit of this mechanism is that it requires only one-way reachability from client to server, but it may be less efficient.

Network File System ( NFS ) in High Performance Networks (White Papers)

The article “Network File System (NFS) in High Performance Networks” by Carnegie Mellon is a very interesting read about NFS performance. Do take a look. Here is a summary of their findings:

  1. For point-to-point throughput, IP over InfiniBand (Connected Mode) is comparable to native InfiniBand.
  2. When the disk is a bottleneck, NFS benefits from neither IPoIB nor RDMA.
  3. When a disk is not a bottleneck, NFS benefits significantly from both IPoIB and RDMA. RDMA is better than IPoIB by ~20%
  4. As the number of concurrent read operations increases, aggregate throughputs achieved for both IPoIB and RDMA significantly improve with no disadvantage for IPoIB