Session Trunking for NFS Available in RHEL 8

This article is based on "Is NFS session trunking available in RHEL?"

Session trunking, whereby a client opens multiple TCP connections to the same NFS server at the same IP address, is provided by the nconnect mount option. This feature is available in RHEL 8 via the following errata:

  • RHSA-2020:4431 for the package(s) kernel-4.18.0-240.el8 or later.
  • RHBA-2020:4530 for the package(s) nfs-utils-2.3.3-35.el8 and libnfsidmap-2.3.3-35.el8 or later.

On the client side, you can configure up to 16 connections (the server and export names below are examples):

[root@nfs-client ~]# mount -t nfs -o nosharecache,nconnect=16 nfs-server:/nfs-share /mnt/nfs-share
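To make the mount persistent across reboots, the same options can go into /etc/fstab. This is a sketch; the server name, export path, and mount point are placeholders, not values from the original article:

```
nfs-server:/nfs-share  /mnt/nfs-share  nfs  nosharecache,nconnect=16  0 0
```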

You can verify the connections by running:

[root@nfs-client ~]# cat /proc/self/mountstats
RPC iostats version: 1.1  p/v: 100003/4 (nfs)
    xprt:   tcp 991 0 2 0 39 13 13 0 13 0 2 0 0
    xprt:   tcp 798 0 2 0 39 6 6 0 6 0 2 0 0
    xprt:   tcp 768 0 2 0 39 6 6 0 6 0 2 0 0
    xprt:   tcp 1013 0 2 0 39 4 4 0 4 0 2 0 0
    xprt:   tcp 828 0 2 0 39 4 4 0 4 0 2 0 0
    xprt:   tcp 702 0 2 0 39 2 2 0 2 0 2 0 0
    xprt:   tcp 783 0 2 0 39 2 2 0 2 0 2 0 0
    xprt:   tcp 858 0 2 0 39 2 2 0 2 0 2 0 0
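Each "xprt:" line in the output above is one transport, i.e. one TCP connection for the mount. A quick way to count them is to grep for that prefix; the sketch below embeds a shortened sample of the output so it runs anywhere (on a live client you would grep /proc/self/mountstats directly):

```shell
# Count transports (TCP connections) by counting "xprt:" lines.
# Sample mountstats output embedded for illustration; on a live NFS client run:
#   grep -c 'xprt:' /proc/self/mountstats
sample='xprt: tcp 991 0 2 0 39 13 13 0 13 0 2 0 0
xprt: tcp 798 0 2 0 39 6 6 0 6 0 2 0 0
xprt: tcp 768 0 2 0 39 6 6 0 6 0 2 0 0'
printf '%s\n' "$sample" | grep -c 'xprt:'
# prints 3
```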

A several-fold performance increase was recorded when nconnect was used against Pure Storage acting as an NFS server; see "Use nconnect to effortlessly increase NFS performance".

(Image taken from "Use nconnect to effortlessly increase NFS performance")


  1. Is NFS session trunking available in RHEL?
  2. Use nconnect to effortlessly increase NFS performance
  3. Explanation of NFS mount options: nconnect=, nosharetransport, sharetransport=

RapidFile Toolkit 1.0

RapidFile Toolkit 1.0 (formerly PureTools) provides fast client-side alternatives to common Linux commands like ls, du, find, chown, chmod, rm, and cp, which have been optimized for the high level of concurrency supported by FlashBlade NFS. You can install it with:


# sudo rpm -U rapidfile-1.0.0-beta.5/rapidfile-1.0.0-beta.5-Linux.rpm


Disk usage:

% pdu -sh /scratch/user1

Copy Files:

% pcp -r -p -u /scratch/user1/ /backup/user1/

Remove Files:

% prm -rv /scratch/user1/

Change Ownership:

% pchown -Rv user1:usergroup /scratch/user1

Change permissions:

% pchmod -Rv 755 /scratch/user1


  1. RapidFile Toolkit for FlashBlade (PureTools)

QLC support in Pure FlashArray//C

Pure Storage published an interesting article on QLC support in FlashArray//C, a system that challenges, or at least comes close to, hybrid (SSD + spinning disk) storage solutions. The article is titled "Hybrid Arrays – Not Dead Yet, But … QLC Flash Is Here".

According to the article,

Why QLC?

It all comes down to how many bits of data can be stored in each tiny little cell on a flash chip. Most enterprise flash arrays currently use triple-level cell (TLC) chips that store three bits in each cell. A newer generation, quad-level cell (QLC) can store—you guessed it—four bits per cell. 

Better still, it’s more economical to manufacture QLC flash chips than TLC flash. Sounds great, except for two big problems: 

  • QLC flash has far lower endurance, typically limited to fewer than 1,000 program/erase cycles. This is one-tenth the endurance of TLC flash.
  • QLC flash is less performant, with higher latency and lower throughput than TLC. 

Because of these technical challenges, there are only a few QLC-based storage arrays on the market. And the only way those arrays can attain enterprise-grade performance is by overprovisioning (which decreases the amount of usable storage) or by adding a persistent memory tier (which significantly increases cost).
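The bits-per-cell comparison above comes down to distinguishable charge levels: storing n bits in a cell requires 2^n levels, so QLC must resolve 16 levels where TLC resolves 8, which narrows the voltage margins and is a key reason endurance and performance drop. A quick check of the arithmetic (my illustration, not from the article):

```shell
# n bits per cell require 2^n distinguishable charge levels:
# SLC=2, MLC=4, TLC=8, QLC=16
for bits in 1 2 3 4; do
  echo "bits=$bits levels=$((2 ** bits))"
done
```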


How did Pure Storage integrate QLC?

So what has Pure done differently? Crucially, the hardware and software engineers who built QLC support into FlashArray//C built on Pure’s unique vertically integrated architecture. Instead of using flash solid-state drive (SSD) modules like other storage vendors, Pure’s proprietary DirectFlash® modules connect raw flash directly to the FlashArray™ storage via NVMe, which reduces latency and increases throughput. And unlike traditional SSDs that use a flash controller or flash translation layer, DirectFlash is primarily raw flash. The flash translation takes place in the software.

This architecture allows the Purity operating environment to schedule and place data on the storage media with extreme precision, overcoming the technical challenges that have constrained other vendors.


For more information, read "Hybrid Arrays – Not Dead Yet, But … QLC Flash Is Here".