This tutorial is a brief write-up on setting up a General Parallel File System (GPFS) cluster with Network Shared Disks (NSDs). For a more detailed and comprehensive treatment, including the underlying principles of quorum and manager nodes, see GPFS: Concepts, Planning, and Installation Guide. This tutorial deals only with the technical setup.
Step 1: Preparation
All nodes to be installed with GPFS must run a supported operating system; for Linux, this means SLES or RHEL.
- The nodes should be able to communicate with each other and password-less ssh should be configured for all nodes in the cluster.
- Create an installation directory, for example /gpfs_install, and copy all the base and update RPMs into it. Keeping all the RPMs in one place eases installation.
- Build the portability layer on each node with a different architecture or kernel level. For more information, see Installing GPFS 3.4 Packages.
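The password-less SSH setup mentioned above can be sketched as follows. This is a minimal sketch run from the admin node; the node names are the ones used later in this tutorial, so adjust them to your environment:

```shell
# Generate a key pair if one does not already exist
# (no passphrase, so GPFS commands can run non-interactively)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node in the cluster
for node in nsd1 nsd2 node1 node2 node3 node4 node5 node6; do
    ssh-copy-id "$node"
done

# Verify: each ssh should print the remote hostname with no password prompt
for node in nsd1 nsd2 node1 node2 node3 node4 node5 node6; do
    ssh "$node" hostname
done
```

Repeat (or distribute the keys) so that every node can reach every other node without a password, not just the admin node.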
Step 2: Export the path of GPFS commands
Remember to export the PATH so that the GPFS administration commands can be found:
# vim ~/.bashrc
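GPFS installs its administration commands under /usr/lpp/mmfs/bin, which is not on the default PATH. Appending a line like the following to ~/.bashrc makes commands such as mmcrcluster and mmlscluster available in every shell:

```shell
# Make the GPFS administration commands (mmcrcluster, mmlscluster, ...)
# resolvable without typing the full path
export PATH=$PATH:/usr/lpp/mmfs/bin
```

Do this on every node, then re-login or source ~/.bashrc for the change to take effect.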
Step 3: Setup of quorum manager and cluster
In a nutshell, from GPFS: Concepts, Planning, and Installation Guide:
Node quorum is the default quorum algorithm for GPFS™. With node quorum:
- Quorum is defined as one plus half of the explicitly defined quorum nodes in the GPFS cluster.
- There are no default quorum nodes; you must specify which nodes have this role.
- For example, in Figure 1, there are three quorum nodes. In this configuration, GPFS remains active as long as there are two quorum nodes available.
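The quorum rule above is plain integer arithmetic: with N explicitly defined quorum nodes, GPFS stays active while at least floor(N/2) + 1 of them are reachable. A quick sketch:

```shell
# Quorum = one plus half (rounded down) of the defined quorum nodes.
# With the three quorum nodes used in this tutorial:
quorum_nodes=3
needed=$(( quorum_nodes / 2 + 1 ))
echo "With $quorum_nodes quorum nodes, $needed must stay available."
# → With 3 quorum nodes, 2 must stay available.
```

Note that an even number of quorum nodes buys no extra resilience: two quorum nodes also require two available, so a single failure breaks quorum.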
Create node_spec.lst in /gpfs_install containing a list of all the nodes in the cluster, one per line:
# vim node_spec.lst
nsd1:quorum-manager
nsd2:quorum-manager
node1:quorum
node2
node3
node4
node5
node6
Create the GPFS cluster using the created file:
# mmcrcluster -n node_spec.lst -p nsd1 -s nsd2 -R /usr/bin/scp -r /usr/bin/ssh
Fri Aug 10 14:40:53 SGT 2012: mmcrcluster: Processing node nsd1-nas
Fri Aug 10 14:40:54 SGT 2012: mmcrcluster: Processing node nsd2-nas
Fri Aug 10 14:40:54 SGT 2012: mmcrcluster: Processing node avocado-h00-nas
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.
mmcrcluster: Propagating the cluster configuration data to all affected nodes.
    This is an asynchronous process.
-n: list of nodes to be included in the cluster
-p: primary GPFS cluster configuration server node
-s: secondary GPFS cluster configuration server node
-R: remote copy command (e.g., rcp or scp)
-r: remote shell command (e.g., rsh or ssh)
To check whether all nodes were properly added, use the mmlscluster command:
GPFS cluster information
========================
  GPFS cluster name:         nsd1
  GPFS cluster id:           1300000000000000000
  GPFS UID domain:           nsd1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    nsd1
  Secondary server:  nsd2

 Node  Daemon node name   IP address     Admin node name   Designation
---------------------------------------------------------------------------
   1   nsd1               192.168.5.60   nsd1-nas          quorum-manager
   2   nsd2               192.168.5.61   nsd2-nas          quorum-manager
   3   node1              192.168.5.24   node1             quorum
Step 4a: Set up license files (mmchlicense)
Configure GPFS server licensing. Create a license file at /gpfs_install, one node per line:
# vim license_server.lst
nsd1
nsd2
node1
# mmchlicense server --accept -N license_server.lst
The output will be
The following nodes will be designated as possessing GPFS server licenses:
        nsd1
        nsd2
        node1
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all affected nodes.
    This is an asynchronous process.
Configure GPFS client licensing. Create a file at /gpfs_install, one node per line:
# vim license_client.lst
node2
node3
node4
node5
node6
# mmchlicense client --accept -N license_client.lst
The output will be
The following nodes will be designated as possessing GPFS client licenses:
        node2
        node3
        node4
        node5
        node6
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all affected nodes.
    This is an asynchronous process.
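To confirm the designations took effect, the license assignments can be listed with mmlslicense (exact output format varies by GPFS release):

```shell
# Summary of how many server and client licenses are in use
mmlslicense

# Per-node detail: which nodes hold server vs. client licenses
mmlslicense -L
```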