I’ve taken most of this information from the article “NFS for Clusters” and from the book “Linux NFS and Automounter Administration” by Erez Zadok.
Profiling a Write Operation on NFS
$ time dd if=/dev/zero of=testfile bs=4k count=16384
16384+0 records in
16384+0 records out
67108864 bytes (67 MB) copied, 0.518172 s, 130 MB/s

real    0m0.529s
user    0m0.016s
sys     0m0.500s
time = time a simple command or give resource usage
dd = convert and copy a file
if = read from FILE instead of stdin
of = write to FILE instead of stdout
bs = read and write BYTES bytes at a time
count = copy only BLOCKS input blocks
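As a quick sanity check on those numbers (this is just arithmetic on the sample run above, not part of the original commands): bs=4k with count=16384 gives exactly the 67108864 bytes that dd reports, and dividing by the elapsed time reproduces the roughly 130 MB/s figure.
$ echo $((4 * 1024 * 16384))
67108864
$ echo "scale=1; 67108864 / 0.518172 / 1000000" | bc
129.5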
According to Wikipedia, /dev/zero is a special file that provides as many null characters (ASCII NUL, 0x00) as are read from it. One of the typical uses is to provide a character stream for overwriting information. Another might be to generate a clean file of a certain size. Like /dev/null, /dev/zero acts as a source and sink for data. All writes to /dev/zero succeed with no other effects (the same as for /dev/null, although /dev/null is the more commonly used data sink); all reads on /dev/zero return as many NULs as characters requested.
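As a small illustration of the “clean file of a certain size” use mentioned above (the filename and size here are arbitrary, just an example), this writes 100 MiB of NUL bytes into blank.img:
$ dd if=/dev/zero of=blank.img bs=1M count=100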
Profiling a Read Operation on NFS
When profiling reads instead of writes, unmount and remount the share first to flush the client caches; otherwise the read may be served from cache and appear nearly instantaneous, giving a misleading impression of read speed.
$ cd /
$ umount /mnt/shareddrive
$ mount /mnt/shareddrive
$ cd /mnt/shareddrive
$ dd if=testfile of=/dev/null bs=4k count=16384
Here, after unmounting and remounting the NFS share, the testfile that already exists on the shared drive is read and written to /dev/null.
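If remounting is inconvenient, one alternative on Linux (assuming you have root on the client; note that a remount also clears NFS attribute caches, so the two are not fully equivalent) is to drop the kernel caches before rereading:
$ sync
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'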
According to the article “NFS for Clusters“, if more than 3% of calls are retransmitted, then there are problems with the network or NFS server.
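To check the client against that 3% rule of thumb, something like the following should work; it assumes the usual Linux nfsstat -c layout, where the first numeric line under “Client rpc stats:” has total calls in column 1 and retransmissions in column 2:
$ nfsstat -c | awk '/^[0-9]/ { printf "retransmitted: %.2f%%\n", $2 / $1 * 100; exit }'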
Look for NFS failures on a shared disk server with:
$ nfsstat -s
or
$ nfsstat -o rpc
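To pull out just the server-side RPC error counters from that output (assuming the Linux nfsstat layout, where the “Server rpc stats:” section has a badcalls column), a quick filter is:
$ nfsstat -s | grep -A 1 badcalls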