Technical Blogs on IBM Spectrum Scale v5.0.2.0

  1. How NFS exports became more dynamic with Spectrum Scale 5.0.2
    https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/
  2. HPC storage on AWS (IBM Spectrum Scale)
    https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/
  3. Upgrade with Excluding the node(s) using Install-toolkit
    https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/
  4. Offline upgrade using Install-toolkit
    https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/
  5. IBM Spectrum Scale for Linux on IBM Z – What’s new in IBM Spectrum Scale 5.0.2
    https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/
  6. What’s New in IBM Spectrum Scale 5.0.2
    https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/
  7. Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails.
    https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/
  8. IBM Spectrum Scale installation toolkit enhancements over releases 5.0.2.0
    https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/
  9. Announcing HDP 3.0 support with IBM Spectrum Scale
    https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/
  10. IBM Spectrum Scale Tuning Overview for Hadoop Workload
    https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/
  11. Making the Most of Multicloud Storage
    https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/
  12. Disaster Recovery for Transparent Cloud Tiering using SOBAR
    https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/
  13. Your Optimal Choice of AI Storage for Today and Tomorrow
    https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/
  14. Analyze IBM Spectrum Scale File Access Audit with ELK Stack
    https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/
  15. Mellanox SX1710 40G switch MLAG configuration for IBM ESS
    https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/
  16. Protocol Problem Determination Guide for IBM Spectrum Scale SMB and NFS Access issues
    https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/
  17. Access Control in IBM Spectrum Scale Object
    https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/
  18. IBM Spectrum Scale HDFS Transparency Docker support
    https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/
  19. Protocol Problem Determination Guide for IBM Spectrum Scale Log Collection
    https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/

Faulty disks accepting I/O requests but not returning any failure to GPFS

We encountered a situation where a defunct disk was accepting I/O requests but not returning any failure in time. As a result, these I/O requests hung until they timed out (10 seconds by default). Normally, when Spectrum Scale/GPFS fails to read or write a disk, the failure is written to the log and the I/O is shifted to other available disks, which should be quick.

Normally such operations should return in 20 milliseconds or less. When an I/O request times out instead, it takes 10 seconds / 20 milliseconds = 500 times as long as a normal request. Even if Spectrum Scale/GPFS is able to choose a fast disk on the second attempt, the operation is still much slower than normal.

Because of striping, a bad or slow disk affects the I/O of many files, far more than it would without striping. I/O on a single file involves several disks, and the operation has to wait for the slowest request to return, so a single bad or slow disk can have a considerable impact on Spectrum Scale/GPFS performance.
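
When a disk is suspected to be bad or slow, a quick first check is to look at disk availability and at the recent I/O history on the NSD server. A minimal sketch, assuming the file system is named gpfs0 as in the examples later in this document:

# mmlsdisk gpfs0 -e
# mmdiag --iohist

mmlsdisk -e lists only the disks that are not in the normal "ready/up" state, and mmdiag --iohist shows recent I/O requests with their completion times, which makes unusually slow disks visible.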

Pre-check before restarting the NSD Nodes

Before restarting the NSD nodes, quorum/manager nodes, or other critical nodes, check the following first to make sure the file system is in a healthy state.

1. Make sure all three quorum nodes are active.

# mmgetstate -N quorumnodes

If any quorum node is not active, do not proceed.

2. Make sure the file system is mounted on the expected nodes.

# mmlsmount gpfs0

If the file system is not mounted on some nodes, resolve that first.
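
To see exactly which nodes have the file system mounted, and to mount it on a node where it is missing, something like the following can be used (a sketch assuming the file system gpfs0 from the example above; node1 is a hypothetical node name):

# mmlsmount gpfs0 -L
# mmmount gpfs0 -N node1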

Spectrum Scale User Group @ London (April)

There were good and varied topics discussed at the Spectrum Scale User Group meeting in London.

Basic Tuning of RDMA Parameters for Spectrum Scale

If your cluster shows symptoms of overload and GPFS keeps reporting “overloaded” in its logs, like the entries below, you may see long waiters and sometimes deadlocks.

Wed Apr 11 15:53:44.232 2018: [I] Sending 'overloaded' status to the entire cluster
Wed Apr 11 15:55:24.488 2018: [I] Sending 'overloaded' status to the entire cluster
Wed Apr 11 15:57:04.743 2018: [I] Sending 'overloaded' status to the entire cluster
Wed Apr 11 15:58:44.998 2018: [I] Sending 'overloaded' status to the entire cluster
Wed Apr 11 16:00:25.253 2018: [I] Sending 'overloaded' status to the entire cluster
Wed Apr 11 16:28:45.601 2018: [I] Sending 'overloaded' status to the entire cluster
Wed Apr 11 16:33:56.817 2018: [N] sdrServ: Received deadlock notification from
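
When these messages appear, the long waiters mentioned above can be inspected directly on the affected nodes using standard GPFS diagnostics:

# mmdiag --waiters

This lists the currently waiting threads and how long each has been waiting, which helps distinguish a transient overload from a genuine deadlock.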

Increase scatterBufferSize to a value that matches the IB fabric
One of the first parameters to tune is scatterBufferSize. According to the wiki, FDR10 can be tuned to 131072 and FDR14 can be tuned to 262144.

The default value of 32768 may perform OK. If the CPU utilization on the NSD IO servers is observed to be high and client IO performance is lower than expected, increasing the value of scatterBufferSize on the clients may improve performance.

# mmchconfig scatterBufferSize=131072
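
To confirm the value currently in effect on a node, the running configuration can be dumped in the same way as shown for the verbs settings below (note that some parameters only take effect after the GPFS daemon is restarted):

# mmfsadm dump config | grep scatterBufferSize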

There are other parameters that can be tuned, but scatterBufferSize was the one that made an immediate difference for me:
verbsRdmaSend
verbsRdmasPerConnection
verbsRdmasPerNode

Disable verbsRdmaSend

# mmchconfig verbsRdmaSend=no -N nsd1,nsd2

Increase verbsRdmasPerNode to 514 for the NSD nodes

# mmchconfig verbsRdmasPerNode=514 -N nsd1,nsd2

Verify that the settings have taken effect

# mmfsadm dump config | grep verbsRdmasPerNode

References:

  1. Best Practices RDMA Tuning

IBM Spectrum Scale Development Blogs for (Q1 2018)

Here is a list of development blogs from this quarter (Q1 2018). As discussed in the User Groups, passing them along:

GDPR Compliance and Unstructured Data Storage
https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/

IBM Spectrum Scale for Linux on IBM Z – Release 5.0 features and highlights
https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/

Management GUI enhancements in IBM Spectrum Scale release 5.0.0
https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/

IBM Spectrum Scale 5.0.0 – What’s new in NFS?
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/

Benefits and implementation of Spectrum Scale sudo wrappers
https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/

IBM Spectrum Scale: Big Data and Analytics Solution Brief
https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/

Variant Sub-blocks in Spectrum Scale 5.0
https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/

Compression support in Spectrum Scale 5.0.0
https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/

IBM Spectrum Scale Versus Apache Hadoop HDFS
https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/

ESS Fault Tolerance
https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/

Genomic Workloads – How To Get it Right From Infrastructure Point Of View.
https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/

IBM Spectrum Scale On AWS Cloud: This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps users who require highly available access to a shared namespace across multiple instances with good performance, without requiring in-depth knowledge of IBM Spectrum Scale.

Detailed Demo : https://www.youtube.com/watch?v=6j5Xj_d0bh4
Brief Demo : https://www.youtube.com/watch?v=-aMQKPW_RfY

Removing Existing NSD Nodes from GPFS Cluster

Removing existing NSD Nodes from the GPFS Cluster is not difficult, but there are several steps to take note of.

Step 1: Make sure the NSD nodes you are removing do not hold the quorum-manager designation.

See Removing Quorum Manager from NSD Nodes
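
If the old NSD nodes still carry the designation, it can typically be removed with mmchnode before proceeding (a sketch, assuming the node names oldnsd1 and oldnsd2 shown in the output below):

# mmchnode --nonquorum --nonmanager -N oldnsd1,oldnsd2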

Step 2: Check that the quorum-manager designation has been removed from the old NSD nodes

# mmlscluster

GPFS cluster information
========================
GPFS cluster name:         mygpfs.gpfsnsd1
GPFS cluster id:           720691660936079521
GPFS UID domain:           mygpfs.gpfsnsd1
Remote shell command:      /usr/bin/ssh
Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
Primary server:    newnsd3
Secondary server:  newnsd4
.....
.....
.....
.....
714   oldnsd1          192.168.111.5   oldnsd1
715   oldnsd2          192.168.111.6   oldnsd2
716   newnsd3          192.168.111.7   newnsd3         quorum-manager
717   newnsd4          192.168.111.8   newnsd4         quorum-manager

Step 3a: Unmount the GPFS File System

# mmumount all -a

Step 3b: Check that the file system has been unmounted on all GPFS nodes

# mmlsmount all

File system gpfs0 is mounted on 0 nodes.

Step 4: Display Network Shared Disk (NSD) information for the GPFS cluster.

# mmlsnsd

File system   Disk name    NSD servers
---------------------------------------------------------------------------
gpfs0         dcs3700A_2   newnsd3,oldnsd1
gpfs0         dcs3700A_3   newnsd3,oldnsd1
gpfs0         dcs3700A_4   newnsd3,oldnsd1
gpfs0         dcs3700A_5   newnsd3,oldnsd1
gpfs0         dcs3700A_6   newnsd3,oldnsd1
gpfs0         dcs3700A_7   newnsd3,oldnsd1
gpfs0         dcs3700B_2   newnsd4,oldnsd2
gpfs0         dcs3700B_3   newnsd4,oldnsd2
gpfs0         dcs3700B_4   newnsd4,oldnsd2
gpfs0         dcs3700B_5   newnsd4,oldnsd2
gpfs0         dcs3700B_6   newnsd4,oldnsd2
gpfs0         dcs3700B_7   newnsd4,oldnsd2

Step 5: Change the Network Shared Disk (NSD) configuration attributes.

# mmchnsd dcs3700A_2:newnsd3

To confirm the changes, issue this command:

# mmlsnsd -d dcs3700A_2
File system   Disk name    NSD servers
---------------------------------------------------------------------------
 gpfs0         dcs3700A_2   newnsd3

Do the same for the remaining NSDs (a loop version is sketched after these commands):

# mmchnsd dcs3700A_3:newnsd3
# mmlsnsd -d dcs3700A_3
# mmchnsd dcs3700A_4:newnsd3
# mmlsnsd -d dcs3700A_4
# mmchnsd dcs3700A_5:newnsd3
# mmlsnsd -d dcs3700A_5
# mmchnsd dcs3700A_6:newnsd3
# mmlsnsd -d dcs3700A_6
# mmchnsd dcs3700A_7:newnsd3
# mmlsnsd -d dcs3700A_7
# mmchnsd dcs3700B_2:newnsd4
# mmlsnsd -d dcs3700B_2
# mmchnsd dcs3700B_3:newnsd4
# mmlsnsd -d dcs3700B_3
# mmchnsd dcs3700B_4:newnsd4
# mmlsnsd -d dcs3700B_4
# mmchnsd dcs3700B_5:newnsd4
# mmlsnsd -d dcs3700B_5
# mmchnsd dcs3700B_6:newnsd4
# mmlsnsd -d dcs3700B_6
# mmchnsd dcs3700B_7:newnsd4
# mmlsnsd -d dcs3700B_7
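
The same sequence can also be written as a small shell loop instead of typing each pair by hand (a sketch using the disk and server names above; adjust the lists to match your own NSDs):

# for d in dcs3700A_3 dcs3700A_4 dcs3700A_5 dcs3700A_6 dcs3700A_7; do mmchnsd ${d}:newnsd3; mmlsnsd -d ${d}; done
# for d in dcs3700B_2 dcs3700B_3 dcs3700B_4 dcs3700B_5 dcs3700B_6 dcs3700B_7; do mmchnsd ${d}:newnsd4; mmlsnsd -d ${d}; done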

Step 6: Remove Old NSD Nodes

Once you have confirmed that no NSD is still served by the old nodes, they can be removed from the cluster.
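
mmdelnode requires that GPFS is no longer running on the nodes being deleted, so shut the daemon down on them first (using the node names from the earlier output):

# mmshutdown -N oldnsd1,oldnsd2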

# mmdelnode -N oldnsd1,oldnsd2

Verify that the old NSD nodes are no longer in the cluster

# mmlscluster

Step 7: Remount the File System

# mmmount all -a

References:

  1. General Parallel File System