Step 10: Create an NSD Specification File
At /gpfs_install, create a disk.lst
# vim disk.lst
An example of the file using primary and secondary NSD servers is as follows:
/dev/sdb:nsd1-nas,nsd2-nas::::ds4200_b
/dev/sdc:nsd2-nas,nsd1-nas::::ds4200_c
The format is
s1:s2:s3:s4:s5:s6:s7
where
s1 = SCSI device
s2 = NSD server list, separated by commas, arranged in primary, secondary order
s3 = NULL (retained for legacy reasons)
s4 = usage
s5 = failure group
s6 = NSD name
s7 = storage pool name
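The example above leaves the usage, failure group, and storage pool fields empty. Purely as an illustration (the usage, failure group, and pool values below simply mirror the mmlsdisk output shown in Step 16 and are not required), a fully populated descriptor could look like this:
/dev/sdb:nsd1-nas,nsd2-nas::dataAndMetadata:4001:ds4200_b:system
/dev/sdc:nsd2-nas,nsd1-nas::dataAndMetadata:4002:ds4200_c:system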
Step 11: Back up the disk.lst
Back up this specification file, since it serves as both the input and output file for the mmcrnsd command, which rewrites it in place.
# cp disk.lst disk.lst.org
Step 12: Create the NSDs
# mmcrnsd -F disk.lst -v no
-F = name of the NSD Specification File
-v = check whether the disk is part of an existing GPFS file system or has ever had a GPFS file system on it (if yes, mmcrnsd will not create it as a new NSD); specifying -v no disables this verification
mmcrnsd: Processing disk /dev/sdb
mmcrnsd: Processing disk /dev/sdc
mmcrnsd: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
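Because mmcrnsd rewrites disk.lst in place (it becomes the input to mmcrfs in Step 15), an optional check is to compare it against the backup taken in Step 11:
# diff disk.lst.org disk.lst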
Step 13: Verify that the NSDs are properly created.
# mmlsnsd
File system   Disk name    NSD servers
---------------------------------------------------------------------------
gpfs1         ds4200_b     nsd1-nas,nsd2-nas
gpfs1         ds4200_c     nsd2-nas,nsd1-nas
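To see which local device each NSD maps to on its servers, mmlsnsd can also print extended information; this is an optional check:
# mmlsnsd -X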
Step 14: Create additional partitions
If you are creating just a single partition, the above will suffice. If you are creating more than one partition, allocate the appropriate number of LUNs and repeat Steps 11 – 13 (creating a new specification file as in Step 10), using a different specification file name for each partition, such as disk2.lst, disk3.lst and so on, as sketched below.
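As a sketch of that repetition for a second partition (the devices /dev/sdd and /dev/sde and the NSD names ds4200_d and ds4200_e are assumptions for illustration only):
# vim disk2.lst
/dev/sdd:nsd1-nas,nsd2-nas::::ds4200_d
/dev/sde:nsd2-nas,nsd1-nas::::ds4200_e
# cp disk2.lst disk2.lst.org
# mmcrnsd -F disk2.lst -v no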
Step 15: Create the GPFS file system
# mmcrfs /gpfs1 gpfs1 -F disk.lst -A yes -B 1m -v no -n 50 -j scatter
/gpfs1 = a mount point
gpfs1 = device entry in /dev for the file system
-F = output file from the mmcrnsd command
-A = mount the file system automatically every time mmfsd is started
-B = actual block size for this file system; it cannot be larger than the maxblocksize set by the mmchconfig command (see the check after this list)
-v = check if this disk is part of an existing GPFS file system or ever had a GPFS file system on it. If yes, mmcrfs will not include this disk in the file system
-n = estimated number of nodes that will mount this file system.
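Before choosing -B, you can check whether the cluster's maxblocksize has been raised; if it has been changed from its default it will appear in the mmlsconfig output (an optional check):
# mmlsconfig | grep -i maxblocksize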
If you have more than one partition, you have to create a file system for each one, for example:
# mmcrfs /gpfs2 gpfs2 -F disk2.lst -A yes -B 1m -v no -n 50 -j scatter
The following disks of gpfs1 will be formatted on nsd1-nas
.....
.....
Formatting file system
Disk up to 2.7 TB can be added to storage pool 'dcs_4200'
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
.....
.....
mmcrfs: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
Step 16: Verify GPFS Disk Status
# mmlsdisk gpfs1
disk         driver  sector  failure  holds     holds                        storage
name         type    size    group    metadata  data   status  availability  pool
------------ ------- ------  -------  --------  -----  ------  ------------  -------
ds4200_b     nsd     512     4001     yes       yes    ready   up            system
ds4200_c     nsd     512     4002     yes       yes    ready   up            system
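As an additional optional check, mmlsfs lists the attributes the file system was created with (block size, automatic mount and so on):
# mmlsfs gpfs1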
Step 17: Mount the file systems and set permissions
# mmmount /gpfs1 -a
Fri Sep 11 12:50:17 EST 2012: mmmount: Mounting file systems ...
Change the permissions for /gpfs1:
# chmod 777 /gpfs1
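To confirm that the file system is mounted on every node, you can list the mounts cluster-wide; this is an optional check:
# mmlsmount gpfs1 -L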
Step 18: Check and test the file system
Use time with dd to test and analyse read and write performance.
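A minimal sketch of such a test, assuming a scratch file /gpfs1/ddtest (the file name, block size and count are illustrative, not prescribed):
# time dd if=/dev/zero of=/gpfs1/ddtest bs=1M count=1024 conv=fsync
# time dd if=/gpfs1/ddtest of=/dev/null bs=1M
# rm /gpfs1/ddtest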
Step 19: Update the /etc/fstab
LABEL=/                 /               ext3    defaults        1 1
tmpfs                   /dev/shm        tmpfs   defaults        0 0
devpts                  /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                   /sys            sysfs   defaults        0 0
proc                    /proc           proc    defaults        0 0
LABEL=SWAP-sda2         swap            swap    defaults        0 0
......
/dev/gpfs1              /gpfs_data      gpfs    rw,mtime,atime,dev=gpfs1,noauto 0 0
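After updating /etc/fstab, an optional check is to confirm the GPFS mount from the operating system side:
# mount | grep gpfs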
More Information: