This is a continuation of Installing and Configuring a GPFS Cluster (Part 1).
Step 4b: Verify License Settings (mmlslicense)
# mmlslicense
Summary information
---------------------
Number of nodes defined in the cluster: 33
Number of nodes with server license designation: 3
Number of nodes with client license designation: 30
Number of nodes still requiring server license designation: 0
Number of nodes still requiring client license designation: 0
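The summary above should always be internally consistent: the server and client counts add up to the node total, and both "still requiring" lines are zero before you proceed. A minimal sketch of that sanity check, with the sample output above embedded in a heredoc (on a live cluster you would parse the real mmlslicense output instead):

```shell
#!/bin/sh
# Sanity-check an mmlslicense summary. The numbers below are the sample
# output from Step 4b; in practice, capture the real command output.
summary=$(cat <<'EOF'
Number of nodes defined in the cluster: 33
Number of nodes with server license designation: 3
Number of nodes with client license designation: 30
Number of nodes still requiring server license designation: 0
Number of nodes still requiring client license designation: 0
EOF
)
total=$(echo "$summary"   | awk -F': ' '/defined in the cluster/ {print $2}')
server=$(echo "$summary"  | awk -F': ' '/with server license/    {print $2}')
client=$(echo "$summary"  | awk -F': ' '/with client license/    {print $2}')
missing=$(echo "$summary" | awk -F': ' '/still requiring/        {s += $2} END {print s}')
if [ "$((server + client))" -eq "$total" ] && [ "$missing" -eq 0 ]; then
    echo "license designations consistent"
else
    echo "license designations need attention (see mmchlicense)"
fi
```

If any nodes still require a designation, assign one with mmchlicense before continuing.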
Step 5a: Configure Cluster Settings
# mmchconfig maxMBpS=2000,maxblocksize=4m,pagepool=2000m,autoload=yes,adminMode=allToAll
- maxMBpS caps the LAN bandwidth GPFS uses per node. To reach peak rate, set it to roughly 2x the desired per-node bandwidth; for 4X InfiniBand QDR, maxMBpS=6000 is recommended
- maxblocksize specifies the maximum file-system block size. When the typical file size and transaction size are unknown, maxblocksize=4m is recommended
- pagepool specifies the size of the GPFS cache. If your applications exhibit temporal locality, pagepool > 1G is recommended; otherwise, pagepool=1G is sufficient
- autoload specifies whether GPFS should automatically start mmfsd when a node is rebooted
- adminMode specifies whether all nodes allow passwordless root access (allToAll) or only a subset of the nodes do (central).
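The maxMBpS rule of thumb above can be written out as simple arithmetic: double the per-node bandwidth you want to sustain. The 3000 MB/s figure used for 4X QDR below is an assumption chosen to reproduce the maxMBpS=6000 recommendation in the text:

```shell
#!/bin/sh
# Sketch of the maxMBpS sizing rule: ~2x the desired per-node bandwidth.
# desired_mb_per_s=3000 for 4X QDR is an illustrative assumption.
desired_mb_per_s=3000
max_mbps=$((2 * desired_mb_per_s))
echo "maxMBpS=$max_mbps"
```

For the 2000 MB/s value used in Step 5a, the same rule implies a target of roughly 1000 MB/s of sustained per-node throughput.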
Step 5b: Verify Cluster Settings
# mmlsconfig
Configuration data for cluster nsd1:
----------------------------------------
myNodeConfigNumber 1
clusterName nsd1-nas
clusterId 130000000000
autoload yes
minReleaseLevel 3.4.0.7
dmapiFileHandleSize 32
maxMBpS 2000
maxblocksize 4m
pagepool 1000m
adminMode allToAll

File systems in cluster nsd1:
---------------------------------
/dev/gpfs1
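To check a single setting rather than eyeball the full listing, you can pull one attribute out of the mmlsconfig output. A minimal sketch, with a sample of the listing above embedded for illustration (on a live cluster, pipe the real mmlsconfig output through the same awk filter):

```shell
#!/bin/sh
# Extract one attribute from mmlsconfig-style "name value" output.
get_config() {
    awk -v key="$1" '$1 == key {print $2}'
}
# Sample lines taken from the Step 5b output above.
sample=$(cat <<'EOF'
autoload yes
maxMBpS 2000
maxblocksize 4m
pagepool 1000m
adminMode allToAll
EOF
)
echo "$sample" | get_config pagepool    # prints the configured pagepool
```

Note that the output shows pagepool 1000m even though Step 5a requested 2000m; re-run mmlsconfig after any mmchconfig change to confirm the value you expect actually took effect.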
Step 6: Check the InfiniBand communication method and details using the ibstatus command
# ibstatus
Infiniband device 'mlx4_0' port 1 status:
        default gid:     fe80:0000:0000:0000:0002:c903:0006:d403
        base lid:        0x2
        sm lid:          0x2
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            40 Gb/sec (4X QDR)
        link_layer:      InfiniBand
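Before moving on to Step 7, the port should be ACTIVE with an InfiniBand link layer; enabling RDMA on a down or Ethernet-mode port will not help. A minimal sketch of that check, with the sample ibstatus output above embedded (in practice, feed it the real ibstatus output):

```shell
#!/bin/sh
# Check that the IB port is usable for RDMA: state ACTIVE and
# link_layer InfiniBand. Sample output taken from Step 6 above.
status=$(cat <<'EOF'
Infiniband device 'mlx4_0' port 1 status:
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            40 Gb/sec (4X QDR)
        link_layer:      InfiniBand
EOF
)
if echo "$status" | grep -q 'state:.*ACTIVE' &&
   echo "$status" | grep -q 'link_layer:.*InfiniBand'; then
    echo "port ready for verbsRdma"
else
    echo "port not ready; check cabling and the subnet manager"
fi
```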
Step 7 (if you are using RDMA): Change the GPFS configuration so that RDMA is used instead of IP over InfiniBand (roughly double the performance)
# mmchconfig verbsRdma=enable,verbsPorts=mlx4_0/1
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
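Since the propagation is asynchronous, it is worth confirming afterwards that the RDMA settings are in the configuration, e.g. with mmlsconfig | grep -E 'verbsRdma|verbsPorts'. A minimal sketch of that verification; the sample heredoc below is what the grep is assumed to return once Step 7 has been applied:

```shell
#!/bin/sh
# Verify that verbsRdma/verbsPorts appear in the cluster configuration.
# Assumed post-Step-7 mmlsconfig lines; replace with real command output.
sample=$(cat <<'EOF'
verbsRdma enable
verbsPorts mlx4_0/1
EOF
)
echo "$sample" | awk '$1 == "verbsRdma" {print $2}'
```

After the GPFS daemon restarts, the mmfs log on each node should also indicate that VERBS RDMA came up; if it did not, recheck the port name given in verbsPorts against the ibstatus output from Step 6.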
More Information