IBM Spectrum Scale Container Native Storage Access (CNSA)

IBM Spectrum Scale Container Native Storage Access (CNSA) allows the deployment of Spectrum Scale in a Red Hat OpenShift cluster. Using a remotely mounted file system, CNSA provides a persistent data store that applications access through the IBM Spectrum Scale Container Storage Interface (CSI) driver via Persistent Volumes (PVs).

Scalable multi-node training for AI jobs on NVIDIA DGX, OpenShift and Spectrum Scale

NVIDIA and IBM built a complex proof of concept to demonstrate the scaling of AI workloads using NVIDIA DGX, Red Hat OpenShift and IBM Spectrum Scale, with ResNet-50 training and image segmentation on the Audi A2D2 dataset as the example workloads. The project team published an IBM Redpaper with all the technical details and will present the key learnings and results.

WekaIO Beats Big Systems on the IO-500 10 Node Challenge

What is the IO-500 10 Node Challenge?

The IO-500 10 Node Challenge is a ranked list comparing storage systems that work in tandem with the world’s largest supercomputers. By limiting the benchmark to 10 nodes, the test challenges single-client performance from the storage system. Each system is evaluated using the IO-500 benchmark, which measures storage performance using read/write bandwidth for large files and read/write/listing performance for small files. (From InsideHPC)

For more information, see WekaIO Beats Big Systems on the IO-500 10 Node Challenge.

Spectrum Scale Solutions

  1. NVMe over RDMA storage via E8 Storage or Excelero
    Excelero NVMesh, Lowest-Latency Distributed Block Storage for IBM Spectrum Scale
  2. Community server + Spectrum Scale Erasure coding
    IBM Spectrum LSF and IBM Spectrum Scale User Group Erasure Code Edition
  3. IBM ESS NVMe edition (planned for release in Q4)
  4. Existing IBM ESS
    Accelerate with IBM Storage: Building and Deploying Elastic Storage Server (ESS)

Spectrum Scale User Group, SCA19 Singapore (March)


Technical Blogs on IBM Spectrum Scale v5.0.2.0

  1. How NFS exports became more dynamic with Spectrum Scale 5.0.2
  2. HPC storage on AWS (IBM Spectrum Scale)
  3. Upgrade with Excluding the node(s) using Install-toolkit
  4. Offline upgrade using Install-toolkit
  5. IBM Spectrum Scale for Linux on IBM Z: What’s new in IBM Spectrum Scale 5.0.2
  6. What’s New in IBM Spectrum Scale 5.0.2
  7. Starting with the IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if a fresh upgrade fails
  8. IBM Spectrum Scale installation toolkit enhancements over releases
  9. Announcing HDP 3.0 support with IBM Spectrum Scale
  10. IBM Spectrum Scale Tuning Overview for Hadoop Workload
  11. Making the Most of Multicloud Storage
  12. Disaster Recovery for Transparent Cloud Tiering using SOBAR
  13. Your Optimal Choice of AI Storage for Today and Tomorrow
  14. Analyze IBM Spectrum Scale File Access Audit with ELK Stack
  15. Mellanox SX1710 40G switch MLAG configuration for IBM ESS
  16. Protocol Problem Determination Guide for IBM Spectrum Scale SMB and NFS Access issues
  17. Access Control in IBM Spectrum Scale Object
  18. IBM Spectrum Scale HDFS Transparency Docker support
  19. Protocol Problem Determination Guide for IBM Spectrum Scale Log Collection

Faulty disks accepting I/O requests without returning failures in GPFS

We have encountered a situation where a defunct disk was accepting IO requests but did not return any failure in time. As a result, these IO requests hung until the timeout expired (default 10 seconds). Normally, when Spectrum Scale/GPFS fails to read or write a disk, the failure is written to the log and IO is shifted to other available disks, which should be quick.

Normally such operations should return in 20 milliseconds or less. When an IO request hits the timeout, it has wasted 10 seconds / 20 milliseconds = 500 times the normal duration. Even if Spectrum Scale/GPFS chooses a fast disk on the second attempt, the operation is still much slower than normal.
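A quick back-of-the-envelope check of the slowdown quoted above (the 10-second timeout and 20-millisecond figures are taken from the text; this is just arithmetic, not GPFS code):

```python
# How much longer does a timed-out IO take than a normal one?
IO_TIMEOUT_S = 10    # default IO timeout from the text
NORMAL_IO_MS = 20    # typical healthy-disk response time from the text

slowdown = (IO_TIMEOUT_S * 1000) / NORMAL_IO_MS
print(slowdown)  # 500.0 — one timed-out request costs 500 normal requests
```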

Because of striping, a bad or slow disk affects the IO of many files, far more than it would without striping. IO on a single file involves several disks, and the operation has to wait for the slowest request to return. A single bad or slow disk can therefore have a considerable influence on overall Spectrum Scale/GPFS performance.
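The effect described above can be sketched with a small model (this is an illustration, not actual GPFS code; the 8-disk stripe width and latency values are assumptions): a full-stripe read completes only when the slowest disk responds, so one degraded disk sets the pace for the whole stripe.

```python
# Sketch: with striping, file IO latency is bounded by the slowest disk.

NORMAL_MS = 20        # healthy-disk response time (assumed typical value)
TIMEOUT_MS = 10_000   # a hung request waits out the default 10 s timeout

def stripe_read_ms(disk_latencies_ms):
    """Latency of a full-stripe read: finishes when the slowest chunk returns."""
    return max(disk_latencies_ms)

all_healthy = [NORMAL_MS] * 8                  # 8-disk stripe, all disks fine
one_bad = [NORMAL_MS] * 7 + [TIMEOUT_MS]       # same stripe, one defunct disk

print(stripe_read_ms(all_healthy))  # 20
print(stripe_read_ms(one_bad))      # 10000
```

Even though 7 of the 8 disks answered in 20 ms, the whole read takes the full 10 seconds, which is why a single slow disk hurts far more here than in an unstriped layout.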