Basic Installing and Configuring of GPFS Cluster (Part 3)

Step 8: Starting the GPFS daemon on all nodes

# mmstartup -a
Fri Aug 31 21:58:56 EST 2010: mmstartup: Starting GPFS ...
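
If you only need to bring up a subset of nodes, mmstartup also accepts a node list with -N (node names here are illustrative):

# mmstartup -N node1,node2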

Step 9: Ensure the GPFS daemon (mmfsd) is active on all nodes before proceeding

# mmgetstate -a

Node number  Node name   GPFS state
-----------------------------------
1            nsd1        active
2            nsd2        active
3            node1       active
4            node2       active
5            node3       active
6            node4       active
7            node5       active
8            node6       active
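
Rather than re-running the command by hand, a small helper script can poll until every node reports active. This is only a convenience sketch (wait_active.sh is a hypothetical name); the awk field positions assume the exact mmgetstate -a layout shown above:

# vim wait_active.sh
# Loop while any node reports a GPFS state other than "active"
while mmgetstate -a | awk 'NR > 2 {print $3}' | grep -qxv active; do
    echo "Waiting for all GPFS daemons to become active ..."
    sleep 10
done
echo "All GPFS daemons are active."
# sh wait_active.sh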

More Information

  1. Basic Installing and Configuring of GPFS Cluster (Part 1)
  2. Basic Installing and Configuring of GPFS Cluster (Part 2)
  3. Basic Installing and Configuring of GPFS Cluster (Part 3)
  4. Basic Installing and Configuring of GPFS Cluster (Part 4)

Basic Installing and Configuring of GPFS Cluster (Part 2)

This is a continuation of Basic Installing and Configuring of GPFS Cluster (Part 1).

Step 4b: Verify License Settings (mmlslicense)

# mmlslicense
Summary information
---------------------
Number of nodes defined in the cluster:                         33
Number of nodes with server license designation:                 3
Number of nodes with client license designation:                30
Number of nodes still requiring server license designation:      0
Number of nodes still requiring client license designation:      0

Step 5a: Configure Cluster Settings

# mmchconfig maxMBpS=2000,maxblocksize=4m,pagepool=2000m,autoload=yes,adminMode=allToAll
  • maxMBpS caps the data rate (in MB/s) that a single node will put onto the LAN. To reach peak rates, set it to approximately 2x the desired per-node bandwidth. For InfiniBand QDR, maxMBpS=6000 is recommended
  • maxblocksize specifies the maximum file-system block size. If the typical file size and transaction size are unknown, maxblocksize=4m is a reasonable default
  • pagepool specifies the size of the GPFS cache. If your applications exhibit temporal locality, pagepool > 1G is recommended; otherwise, pagepool=1G is sufficient
  • autoload specifies whether mmfsd is loaded automatically when a node is rebooted
  • adminMode specifies whether all nodes accept passwordless root access from all other nodes (allToAll) or only a subset of the nodes do (central).
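
Note that a plain mmchconfig change generally takes effect at the next daemon restart. The -i flag applies a change immediately and permanently (for attributes that allow it), and -N limits the change to specific nodes. For example (node names are illustrative):

# mmchconfig pagepool=4000m -i -N nsd1,nsd2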

Step 5b: Verify Cluster Settings

# mmlsconfig
Configuration data for cluster nsd1:
----------------------------------------
myNodeConfigNumber 1
clusterName nsd1-nas
clusterId 130000000000
autoload yes
minReleaseLevel 3.4.0.7
dmapiFileHandleSize 32
maxMBpS 2000
maxblocksize 4m
pagepool 2000m
adminMode allToAll

File systems in cluster nsd1:
---------------------------------
/dev/gpfs1
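
To query a single attribute rather than the whole list, you can pass its name to mmlsconfig, e.g.:

# mmlsconfig pagepool
pagepool 2000m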

Step 6: Check the InfiniBand communication method and details using the ibstatus command

# ibstatus
Infiniband device 'mlx4_0' port 1 status:

        default gid:     fe80:0000:0000:0000:0002:c903:0006:d403
        base lid:        0x2
        sm lid:          0x2
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            40 Gb/sec (4X QDR)
        link_layer:      InfiniBand
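
The port should show ACTIVE/LinkUp on every node before you enable RDMA. A quick cluster-wide check, relying on the passwordless ssh configured in Part 1 (host names are illustrative):

# for host in nsd1 nsd2 node1 node2 node3 node4 node5 node6; do
>   echo "== $host =="; ssh $host "ibstatus | grep -E 'state|rate'"
> done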

Step 7 (if you are using RDMA): Change the GPFS configuration so that RDMA is used instead of IP over InfiniBand (roughly doubling throughput)

# mmchconfig verbsRdma=enable,verbsPorts=mlx4_0/1
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
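
The verbsRdma and verbsPorts settings only take effect when the GPFS daemon is restarted. After bouncing the cluster, the mmfs log (typically /var/adm/ras/mmfs.log.latest) should report that VERBS RDMA has started; the exact log wording varies by release, so treat this as a rough check:

# mmshutdown -a
# mmstartup -a
# grep -i "verbs rdma" /var/adm/ras/mmfs.log.latest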

More Information

  1. Basic Installing and Configuring of GPFS Cluster (Part 1)
  2. Basic Installing and Configuring of GPFS Cluster (Part 2)
  3. Basic Installing and Configuring of GPFS Cluster (Part 3)
  4. Basic Installing and Configuring of GPFS Cluster (Part 4)

Basic Installing and Configuring of GPFS Cluster (Part 1)

This tutorial is a brief write-up on setting up General Parallel File System (GPFS) Network Shared Disks (NSD). For a more detailed and comprehensive treatment, and for an understanding of the underlying principles of the quorum manager, see GPFS: Concepts, Planning, and Installation Guide. This tutorial deals only with the technical setup.

Step 1: Preparation

All nodes that will run GPFS must be installed with a supported operating system; for Linux, this means SLES or RHEL.

  1. The nodes should be able to communicate with each other, and passwordless ssh should be configured between all nodes in the cluster.
  2. Create an installation directory, for example /gpfs_install, and copy all the base and update RPMs into it.
  3. Build the portability layer for each node with a different architecture or kernel level. For more information, see Installing GPFS 3.4 Packages. For ease of installation, keep all the RPMs in /gpfs_install; a sketch of the install and build steps follows this list.
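
As an illustration only, the RPM installation and portability-layer build typically look like the following on GPFS 3.4 for Linux (package file names are examples; use the versions you actually downloaded):

# cd /gpfs_install
# rpm -ivh gpfs.base*.rpm gpfs.gpl*.rpm gpfs.msg*.rpm gpfs.docs*.rpm
# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages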

Step 2: Export the path of GPFS commands

Remember to export the PATH:

# vim ~/.bashrc
export PATH=$PATH:/usr/lpp/mmfs/bin
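
Reload the shell configuration and confirm the GPFS commands resolve (a quick sanity check):

# source ~/.bashrc
# which mmlscluster
/usr/lpp/mmfs/bin/mmlscluster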

Step 3: Setup of quorum manager and cluster

In a nutshell, as explained in GPFS: Concepts, Planning, and Installation Guide:

Node quorum is the default quorum algorithm for GPFS™. With node quorum:

  • Quorum is defined as one plus half of the explicitly defined quorum nodes in the GPFS cluster.
  • There are no default quorum nodes; you must specify which nodes have this role.
  • For example, with three quorum nodes defined, quorum is two (1 + floor(3/2) = 2), so GPFS remains active as long as at least two quorum nodes are available.

Create node_spec.lst at /gpfs_install containing a list of all the nodes in the cluster. Each line takes the form NodeName:Designation; nodes listed without a designation default to non-quorum client nodes.

# vim node_spec.lst
nsd1:quorum-manager
nsd2:quorum-manager
node1:quorum
node2
node3
node4
node5
node6

Create the GPFS cluster using this file:

# mmcrcluster -n node_spec.lst -p nsd1 -s nsd2 -R /usr/bin/scp -r /usr/bin/ssh
Fri Aug 10 14:40:53 SGT 2012: mmcrcluster: Processing node nsd1-nas
Fri Aug 10 14:40:54 SGT 2012: mmcrcluster: Processing node nsd2-nas
Fri Aug 10 14:40:54 SGT 2012: mmcrcluster: Processing node avocado-h00-nas
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
mmcrcluster: Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.

-n: list of nodes to be included in the cluster
-p: primary GPFS cluster configuration server node
-s: secondary GPFS cluster configuration server node
-R: remote copy command (e.g., rcp or scp)
-r: remote shell command (e.g., rsh or ssh)

To check whether all nodes were properly added, use the mmlscluster command

# mmlscluster
GPFS cluster information
========================
GPFS cluster name:         nsd1
GPFS cluster id:           1300000000000000000
GPFS UID domain:           nsd1
Remote shell command:      /usr/bin/ssh
Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
Primary server:    nsd1
Secondary server:  nsd2

Node  Daemon node name     IP address       Admin node name     Designation
---------------------------------------------------------------------------
1     nsd1                 192.168.5.60     nsd1-nas            quorum-manager
2     nsd2                 192.168.5.61     nsd2-nas            quorum-manager
3     node1                192.168.5.24     node1               quorum

Step 4a: Set up license files (mmchlicense)

Configure GPFS Server Licensing. Create a license file at /gpfs_install

# vim license_server.lst
nsd1
nsd2
node1
# mmchlicense server --accept -N license_server.lst

The output will be

The following nodes will be designated as possessing GPFS server licenses:
nsd1
nsd2
node1
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.

Configuring GPFS Client Licensing. Create a file at /gpfs_install

# vim license_client.lst
node2
node3
node4
node5
node6
# mmchlicense client --accept -N license_client.lst

The output will be

The following nodes will be designated as possessing GPFS client licenses:
node2
node3
node4
node5
node6

mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.
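
To confirm the per-node designations, mmlslicense also accepts a -L flag that lists each node individually (the summary form is shown in Part 2, Step 4b):

# mmlslicense -L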

More Information

  1. Basic Installing and Configuring of GPFS Cluster (Part 1)
  2. Basic Installing and Configuring of GPFS Cluster (Part 2)
  3. Basic Installing and Configuring of GPFS Cluster (Part 3)
  4. Basic Installing and Configuring of GPFS Cluster (Part 4)