
Storage Performance Basics for Deep Learning

This is an interesting write-up by James Mauro of Nvidia on Storage Performance Basics for Deep Learning.

“The complexity of the workloads plus the volume of data required to feed deep-learning training creates a challenging performance environment. Deep learning workloads cut across a broad array of data sources (images, binary data, etc.), imposing different disk IO load attributes, depending on the model and a myriad of parameters and variables.”
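As a rough illustration of the point about IO load, a sequential-read sanity check is often the first storage number worth collecting before tuning a training input pipeline. This is only a sketch using a local scratch file (an assumption; real training data would live on the storage actually under test), and note it reads back through the page cache unless the file is larger than RAM:

```shell
# Sketch: measure sequential-read throughput of a scratch file with dd.
# Assumption: a 64 MiB local temp file stands in for real training data.
seq_read_check() {
    f=$(mktemp)
    dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null  # create the test file
    dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1      # dd prints the rate on stderr
    rm -f "$f"
}

seq_read_check
```

Random small-block reads (closer to what many training loaders generate) would need a different tool, but even this one line quickly separates a slow mount from a slow input pipeline.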

For further reading, do take a look.

RapidFile Toolkit 1.0

RapidFile Toolkit 1.0 (formerly PureTools) provides fast client-side alternatives to common Linux commands like ls, du, find, chown, chmod, rm and cp, which have been optimized for the high level of concurrency supported by FlashBlade NFS. To install the toolkit:


# sudo rpm -U rapidfile-1.0.0-beta.5/rapidfile-1.0.0-beta.5-Linux.rpm


Disk Usage:

% pdu -sh /scratch/user1

Copy Files:

% pcp -r -p -u /scratch/user1/ /backup/user1/

Remove Files:

% prm -rv /scratch/user1/

Change Ownership:

% pchown -Rv user1:usergroup /scratch/user1

Change Permission:

% pchmod -Rv 755 /scratch/user1
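Since the parallel tools mirror their coreutils counterparts, scripts can adopt them opportunistically. A minimal sketch (assuming pdu keeps du's -s/-h summary semantics, as the example above suggests) that falls back to plain du on clients where the toolkit is not installed:

```shell
# Sketch: use RapidFile's parallel pdu when available, else fall back to du.
usage_of() {
    dir="${1:-.}"
    if command -v pdu >/dev/null 2>&1; then
        pdu -sh "$dir"   # parallel client-side walk (RapidFile Toolkit)
    else
        du -sh "$dir"    # standard single-threaded fallback
    fi
}

usage_of /tmp
```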


  1. RapidFile Toolkit for FlashBlade (PureTools)

Edge Networking and Data: How to Build Edge Clouds with Low Latency Storage

Join Lightbits Labs and Intel in this joint webinar on deploying compute, storage and networking in edge data centers. You’ll learn why compute and storage are being pushed to the edge (vs. the central cloud), what benefits this produces, and the challenges that must be overcome for successful deployment and operation.

Do register to view the recording.


QLC support in Pure FlashArray//C

Read an interesting article by Pure Storage on QLC support in FlashArray//C, which challenges, or at least comes close to, hybrid (SSD + spinning disk) storage solutions. The article is titled “Hybrid Arrays – Not Dead Yet, But … QLC Flash Is Here”.

According to the article,

Why QLC?

It all comes down to how many bits of data can be stored in each tiny little cell on a flash chip. Most enterprise flash arrays currently use triple-level cell (TLC) chips that store three bits in each cell. A newer generation, quad-level cell (QLC) can store—you guessed it—four bits per cell. 

Better still, it’s more economical to manufacture QLC flash chips than TLC flash. Sounds great, except for two big problems: 

  • QLC flash has far lower endurance, typically limited to fewer than 1,000 program/erase cycles. This is one-tenth the endurance of TLC flash.
  • QLC flash is less performant, with higher latency and lower throughput than TLC. 

Because of these technical challenges, there are only a few QLC-based storage arrays on the market. And the only way those arrays can attain enterprise-grade performance is by overprovisioning (which decreases the amount of usable storage) or by adding a persistent memory tier (which significantly increases cost).
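The figures quoted above translate into a simple trade-off. A back-of-envelope check, using the article's 3 vs. 4 bits per cell and its roughly 1,000 P/E cycles for QLC (the 10,000-cycle TLC figure is an assumed typical value implied by "one-tenth"):

```shell
# Capacity: QLC stores 4 bits/cell vs TLC's 3 -> ~33% more data per cell.
# Endurance: ~1,000 P/E cycles vs ~10,000 -> one-tenth the write lifetime.
awk 'BEGIN {
    tlc_bits = 3; qlc_bits = 4
    tlc_pe = 10000; qlc_pe = 1000    # TLC figure is an assumed typical value
    printf "capacity gain per cell: %.0f%%\n", (qlc_bits - tlc_bits) / tlc_bits * 100
    printf "endurance ratio (QLC/TLC): %.2f\n", qlc_pe / tlc_pe
}'
```

A ~33% density gain bought with a 10x endurance cut is exactly why QLC needs careful write scheduling, which is the problem Pure's approach below addresses.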


How did Pure Storage integrate QLC?

So what has Pure done differently? Crucially, the hardware and software engineers who built QLC support into FlashArray//C built on Pure’s unique vertically integrated architecture. Instead of using flash solid-state drive (SSD) modules like other storage vendors, Pure’s proprietary DirectFlash® modules connect raw flash directly to the FlashArray™ storage via NVMe, which reduces latency and increases throughput. And unlike traditional SSDs that use a flash controller or flash translation layer, DirectFlash is primarily raw flash. The flash translation takes place in the software.

This architecture allows the Purity operating environment to schedule and place data on the storage media with extreme precision, overcoming the technical challenges that have constrained other vendors.


For more information, do read “Hybrid Arrays – Not Dead Yet, But … QLC Flash Is Here”.

showmount fails with clnt_create: RPC: Program not registered from an NFS client communicating with a NetApp filer

1. Assuming this is your mount command

mount -t nfs -o vers=3 XXX.XXX.XXX.XXX:/myserver/nfs /myclient/nfs


2. And if you are using the showmount command from an NFS client and observe the following error:

clnt_create: RPC: Program not registered


3. You have to access the NetApp storage to check that the NFS protocol is enabled. I’m using the NetApp OnCommand System Manager.

4. Check that the NFS client can now see the server’s exports:

showmount --no-headers -e nfs_server
/ (everyone)
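The "Program not registered" message generally means the server's portmapper has no mountd registration, which is exactly what happens when NFS is disabled on the filer. A quick client-side sketch using rpcinfo (the XXX placeholder is the filer address from the mount command above):

```shell
# Sketch: "RPC: Program not registered" usually means mountd is not
# registered with the server's portmapper. rpcinfo lists what is registered.
check_mountd() {
    server="$1"
    if rpcinfo -p "$server" 2>/dev/null | grep -q mountd; then
        echo "mountd registered on $server"
    else
        echo "mountd NOT registered on $server (is NFS enabled on the filer?)"
    fi
}

check_mountd XXX.XXX.XXX.XXX
```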


  1. showmount fails with clnt_create: RPC: Program not registered when executed from a RHEL6 NFS client communicating with a NetApp filer