Setting up Secondary Master Host on Platform LSF

Setting up a secondary master host on Platform LSF is straightforward.

Step 1: Update the LSF_MASTER_LIST parameter in lsf.conf by adding the secondary master candidate after the current master host

# cd $LSF_ENVDIR
# vim lsf.conf

At line 114

.....
LSF_MASTER_LIST="h00 h01"
.....

If you wish to switch the order of the master hosts for maintenance, simply reverse the order of the hosts in LSF_MASTER_LIST.
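
For example, to make h01 the first master candidate (reusing the host names from the snippet above):

.....
LSF_MASTER_LIST="h01 h00"
.....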

Step 2: Reconfigure the cluster and restart the LSF mbatchd and mbschd processes

# lsadmin reconfig
# badmin mbdrestart

Step 3: Update the master_hosts in lsb.hosts

# cd $LSF_ENVDIR/lsbatch/yourhpccluster/configdir
# vim lsb.hosts
Begin HostGroup
GROUP_NAME    GROUP_MEMBER      #GROUP_ADMIN # Key words
master_hosts      (h00 h01)
End HostGroup
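
After saving lsb.hosts, a badmin reconfig is typically needed for mbatchd to pick up the change; lsid can then be used to confirm which host is currently acting as master:

# badmin reconfig
# lsid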

References:

  1. Switch LSF master host to secondary master candidate

Submitting an interactive job on Platform LSF

Using a Pseudo-terminal to launch Interactive Job

Point 1: Submit a batch interactive job using a pseudo-terminal.

$ bsub -Ip vim output.log

Submits a batch interactive job to edit output.log.

Point 2:  Submit a batch interactive job and create a pseudo-terminal with shell mode support.

$ bsub -Is bash

Submits a batch interactive job that starts up bash as an interactive shell.

When you specify the -Is option, bsub submits a batch interactive job and creates a pseudo-terminal with shell mode support when the job starts.
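
The -Is option can be combined with the usual bsub resource options. As an illustrative sketch (the slot count and resource string are only examples), an interactive shell with 4 slots on a single host:

$ bsub -Is -n 4 -R "span[hosts=1]" bash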

References:

  1. Submit an interactive job by using a pseudo-terminal

Cleaning up Platform LSF parallel Job Execution Problems – Part 3

This part covers abnormal task exits during parallel job execution.

This article is taken from Cleaning up Platform LSF parallel job execution problems

If some tasks exit abnormally during parallel job execution, LSF takes action to terminate and clean up the entire job. This behaviour can be customized with RTASK_GONE_ACTION in an application profile in lsb.applications or with the LSB_DJOB_RTASK_GONE_ACTION environment variable in the job environment. The LSB_DJOB_RTASK_GONE_ACTION environment variable overrides the setting of RTASK_GONE_ACTION in lsb.applications.

The following values are supported:

[KILLJOB_TASKDONE | KILLJOB_TASKEXIT] [IGNORE_TASKCRASH]

KILLJOB_TASKDONE:     LSF terminates all tasks in the job when one remote task exits with a zero value.
KILLJOB_TASKEXIT:     LSF terminates all tasks in the job when one remote task exits with a non-zero value.
IGNORE_TASKCRASH:     LSF does nothing when a remote task crashes. The job continues to run to completion.

By default, RTASK_GONE_ACTION is not defined, so LSF terminates all tasks and shuts down the entire job when one task crashes.
For example:

  • Define an application profile in lsb.applications:

Begin Application
NAME         = myApp
DJOB_COMMFAIL_ACTION=IGNORE_COMMFAIL
RTASK_GONE_ACTION="IGNORE_TASKCRASH KILLJOB_TASKEXIT"
DESCRIPTION  = Application profile example
End Application

  • Run badmin reconfig as LSF administrator to make the configuration take effect.
  • Submit an MPICH2 job with -app myApp:

$ bsub -app myApp -n 4 -R "span[ptile=2]" mpiexec.hydra ./cpi
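
The same behaviour can also be set per job from the submission environment, since LSB_DJOB_RTASK_GONE_ACTION overrides the application profile. A minimal sketch, reusing the job from the example above:

$ export LSB_DJOB_RTASK_GONE_ACTION="IGNORE_TASKCRASH KILLJOB_TASKEXIT"
$ bsub -app myApp -n 4 -R "span[ptile=2]" mpiexec.hydra ./cpi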

References:

  1. Cleaning up parallel job execution problems
  2. Cleaning up Platform LSF parallel Job Execution Problems – Part 1
  3. Cleaning up Platform LSF parallel Job Execution Problems – Part 2
  4. Cleaning up Platform LSF parallel Job Execution Problems – Part 3


Compiling Intel BLAS95 and LAPACK95 Interface Wrapper Library

BLAS95 and LAPACK95 wrappers to Intel MKL are delivered both as part of Intel MKL and as source code which can be compiled to build a standalone wrapper library with exactly the same functionality.

The source code and makefiles for the wrappers are found in the …..\interfaces\blas95 subdirectory of the Intel MKL directory.

For BLAS95

# cd $MKLROOT
# cd interfaces/blas95
# make libintel64  INSTALL_DIR=$MKLROOT/lib/intel64

Once compiled, the libraries are placed in $MKLROOT/lib/intel64.

For LAPACK95

# cd $MKLROOT
# cd interfaces/lapack95
# make libintel64  INSTALL_DIR=$MKLROOT/lib/intel64

Once compiled, the libraries are placed in $MKLROOT/lib/intel64.
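
As a rough sketch of how a Fortran program might be linked against the freshly built wrappers (the LP64 library names, include path, and -mkl flag below are assumptions that depend on your MKL and compiler versions; adjust them for your installation):

$ ifort myprog.f90 \
      -I$MKLROOT/include/intel64/lp64 \
      $MKLROOT/lib/intel64/libmkl_blas95_lp64.a \
      $MKLROOT/lib/intel64/libmkl_lapack95_lp64.a \
      -mkl=sequential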

Cleaning up Platform LSF parallel Job Execution Problems – Part 2

This part covers a non-first execution host crashing or hanging during parallel job execution.

Taken from IBM Spectrum LSF Wiki – Cleaning up Parallel Job Execution Problems

Scenario 1: LSB_FANOUT_TIMEOUT_PER_LAYER (lsf.conf)

Before a parallel job executes, LSF needs to do some setup work on each job execution host and populate job information to all these hosts. LSF provides a communication fan-out framework to handle this. If an execution host fails, the framework has a timeout value that controls how quickly LSF treats the communication as failed and rolls back the job dispatch decision. By default, the timeout value is 20 seconds for each communication layer. Define LSB_FANOUT_TIMEOUT_PER_LAYER in lsf.conf to customize the timeout value.
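
For example, to raise the per-layer timeout to 60 seconds for large jobs (the value follows the guidance in the notes below), add the following line to lsf.conf:

.....
LSB_FANOUT_TIMEOUT_PER_LAYER=60
.....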

After changing lsf.conf, restart sbatchd on all hosts so that the new value takes effect:

# badmin hrestart all

Important Notes

  1. LSB_FANOUT_TIMEOUT_PER_LAYER can also be defined in the environment before job submission to override the value specified in lsf.conf.
  2. You can set a larger value for large jobs (for example, 60 for jobs spanning over 1K nodes).
  3. One indicator that this parameter needs tuning is bhist -l showing jobs bouncing back and forth between starting and pending due to timeout errors. Timeout errors are logged in the sbatchd log.
$ bhist -l 100
Job <100>, User <...>, Project <...>, Command <...>
Mon Oct 21 19:20:43: Submitted from host <...>, to Queue <...>, CWD <...>,
                     320 Processors Requested, Requested Resources <span[ptile=8]>;
Mon Oct 21 19:20:43: Dispatched to 40 Hosts/Processors <...>;
......
Mon Oct 21 19:20:43: Starting (Pid 19137);
Mon Oct 21 19:21:06: Pending: Failed to send fan-out information to other SBDs;

Scenario 2: LSF_DJOB_TASK_REG_WAIT_TIME (lsf.conf)

When a parallel job is started, an LSF component on the first execution host needs to receive a registration message from other components on non-first execution hosts. By default, LSF waits for 300 seconds for those registration messages. After 300 seconds, LSF starts to clean up the job.

Use LSF_DJOB_TASK_REG_WAIT_TIME to customize the time period. The parameter can be defined in lsf.conf or in the job environment at job submission. The parameter in lsf.conf applies to all jobs in the cluster, while the job environment variable only controls the behaviour for that particular job. The job environment variable overrides the value in lsf.conf. The unit is seconds. Set a larger value for large jobs (for example, 600 seconds for jobs across 5000 nodes).
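
A minimal sketch of both ways of setting it to 600 seconds (the job command and resource string below are only illustrative):

In lsf.conf:

LSF_DJOB_TASK_REG_WAIT_TIME=600

Or per job, in the submission environment:

$ export LSF_DJOB_TASK_REG_WAIT_TIME=600
$ bsub -n 320 -R "span[ptile=8]" ./a.out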

After changing the value in lsf.conf, restart RES on the hosts so that the change takes effect:

# lsadmin resrestart

You should set this parameter if you see an INFO level message like the following in res.log.first_execution_host:

$ grep "waiting for all tasks to register" res.log.hostA
Oct 20 20:20:29 2013 7866 6 9.1.1 doHouseKeeping4ParallelJobs: job 101 timed out (20) waiting for all tasks to register, registered (315) out of (320)

Scenario 3: DJOB_COMMFAIL_ACTION (lsb.applications)

After a job is successfully launched and all tasks register themselves, LSF keeps monitoring the connection from the first node to the rest of the execution nodes. If a connection failure is detected, by default, LSF begins to shut down the job. Configure DJOB_COMMFAIL_ACTION in an application profile in lsb.applications to customize the behaviour. The parameter syntax is:

DJOB_COMMFAIL_ACTION="KILL_TASKS|IGNORE_COMMFAIL"

IGNORE_COMMFAIL:     LSF allows the job to continue to run. Communication failures between the first node and the rest of the execution nodes are ignored and the job continues.

KILL_TASKS:          LSF tries to kill all the current tasks of a parallel or distributed job associated with the communication failure.

By default, DJOB_COMMFAIL_ACTION is not defined: LSF terminates all tasks and shuts down the entire job.

You can also set the LSB_DJOB_COMMFAIL_ACTION environment variable before submitting the job to override the value set in the application profile.
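
For example, a minimal sketch of overriding the profile for a single job (the job command is illustrative):

$ export LSB_DJOB_COMMFAIL_ACTION="IGNORE_COMMFAIL"
$ bsub -app myApp -n 4 ./a.out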

References:

  1. Cleaning up parallel job execution problems
  2. Cleaning up Platform LSF parallel Job Execution Problems – Part 1
  3. Cleaning up Platform LSF parallel Job Execution Problems – Part 2
  4. Cleaning up Platform LSF parallel Job Execution Problems – Part 3

Cleaning up Platform LSF parallel Job Execution Problems – Part 1

Taken from IBM Spectrum LSF Wiki – Cleaning up Parallel Job Execution Problems. 

 Job cleanup refers to the following:
  1. Clean up all left-over processes on all execution nodes
  2. Perform post-job cleanup operations on all execution nodes, such as cleaning up cgroups, cleaning up Kerberos credentials, resetting CPU frequencies, etc.
  3. Clean up the job from LSF and mark job Exit status

The LSF default behavior is designed to handle the most common recovery cases for these scenarios. LSF also offers a set of parameters that allow end users to tune LSF behavior for each scenario, especially how fast LSF detects each failure and what action LSF takes in response.

 There are typically three scenarios requiring job cleanup:
  1. First execution host crashing or hanging
  2. Non-first execution host crashing or hanging
  3. Parallel job tasks exit abnormally
This article describes how to configure LSF to handle these scenarios.
Scenario 1: Parallel job first execution host crashing or hanging

When the first execution host crashes or hangs, by default LSF marks a running job as UNKNOWN. LSF does not clean up the job until the host comes back and the LSF master confirms that the job is really gone from the system. However, this default behaviour may not always be desirable, since such hung jobs hold their resource allocation for some time. Define REMOVE_HUNG_JOBS_FOR in lsb.params to change the default LSF behaviour and remove hung jobs from the system automatically.

In lsb.params

REMOVE_HUNG_JOBS_FOR = runlimit:host_unavail

LSF removes jobs if they run 10 minutes past the job RUN LIMIT, or if they remain UNKNOWN for 10 minutes because the first execution host became unavailable. To change the timing, specify a wait_time for each condition in lsb.params:

REMOVE_HUNG_JOBS_FOR = runlimit[,wait_time=5]:host_unavail[,wait_time=5]
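
As with other lsb.params changes, a reconfiguration is typically needed for the new setting to take effect:

# badmin reconfig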


Other Information:

For DJOB_HB_INTERVAL, DJOB_RU_INTERVAL (lsb.applications) and LSF_RES_ALIVE_TIMEOUT (lsf.conf)

  1. The default value of LSB_DJOB_HB_INTERVAL is 120 seconds per 1000 nodes
  2. The default value of LSB_DJOB_RU_INTERVAL is 300 seconds per 1000 nodes

For large, long-running parallel jobs, LSB_DJOB_RU_INTERVAL can be set to a long interval or even disabled with a value of 0, to prevent overly frequent resource usage updates, which consume network bandwidth as well as CPU time for LSF to process large volumes of resource usage information. LSB_DJOB_HB_INTERVAL cannot be disabled.
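
For illustration only, a sketch of how these intervals might be set cluster-side in an application profile, using the lsb.applications parameter names from the heading above (the profile name bigMPI and the values are assumptions, not recommendations):

Begin Application
NAME              = bigMPI
DJOB_HB_INTERVAL  = 240
DJOB_RU_INTERVAL  = 600
DESCRIPTION       = Large long-running parallel jobs
End Application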

References: 

  1. Cleaning up parallel job execution problems
  2. Cleaning up Platform LSF parallel Job Execution Problems – Part 1
  3. Cleaning up Platform LSF parallel Job Execution Problems – Part 2
  4. Cleaning up Platform LSF parallel Job Execution Problems – Part 3

Submitting Jobs with Topology Scheduling on Platform LSF

This blog is a follow-up to Topology Scheduling on Platform LSF.

Scenario 1: Submit directly to a specific Compute Unit

$ bsub -m "r1" -n 64 ./a.out

This job asks for 64 slots, all of which must be on hosts in the CU r1.

Scenario 2: Requesting a Compute Unit Type level (for example, rack)

$ bsub -R "cu[type=rack]" -n 64 ./a.out

This job asks for 64 slots, with its compute unit requirement applied at the rack level.

Scenario 3: Sequential Job Packing

The following job uses cu[pref=minavail] to prefer the compute units with the fewest free slots:

$ bsub -R "cu[pref=minavail]" ./a.out

Scenario 4: Parallel Job Packing

The following job uses cu[pref=maxavail] to prefer the compute units with the most free slots:

$ bsub -R "cu[pref=maxavail]" -n 64 ./a.out

Scenario 5: Limiting the number of CUs a job can span

The following allows the job to span at most 2 CUs of type rack, preferring the CUs with the most free slots:

$ bsub -R "cu[type=rack:pref=maxavail:maxcus=2]" -n 32 ./a.out

References:

  1. Using Compute Units for Topology Scheduling
  2. Topology Scheduling on Platform LSF

Topology Scheduling on Platform LSF

For a highly parallel job that spans multiple hosts, it is desirable to allocate hosts that are close together in the network topology. The purpose is to minimize communication latency.

This article is taken from the IBM Platform LSF Wiki "Using Compute Units for Topology Scheduling".

Step 1: Define COMPUTE_UNIT_TYPES in lsb.params

COMPUTE_UNIT_TYPES = enclosure! switch rack
  1. The example specifies 3 CU types. The order of the values corresponds to levels in the network topology: hosts grouped into CU type enclosure are contained in CU type switch, and CU type switch is contained in CU type rack.
  2. The exclamation mark (!) following switch means that this is the default level used for jobs with CU topology requirements. If the exclamation mark is omitted, the first string listed is the default type.

Step 2:  Arrange hosts into lsb.hosts

Begin ComputeUnit
NAME    TYPE            CONDENSE        MEMBER
en1-1   enclosure        Y                   (c00 c01 c02)
en1-2   enclosure        Y                   (c03 c04 c05)
en1-3   enclosure        Y                   (c06 c07 c08 c09 c10)
.....
s1      switch           Y                   (en1-1 en1-2)
s2      switch           Y                   (en1-3)
.....
r1      rack             Y                   (s1 s2)
.....
End ComputeUnit

Update mbatchd by running:

# badmin reconfig

View the CU Configuration

# bmgroup -cu

Step 3: Using bhosts to display information

Since CONDENSE is set to "Y" in the ComputeUnit section of lsb.hosts, bhosts displays a condensed view grouped by compute unit. If you run bhosts -X, you will see all the individual hosts.
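
For example, to compare the two views:

# bhosts
# bhosts -X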

References:

  1. Using Compute Units for Topology Scheduling

Basic Configuration for Platform Application Centre 10.1

You have to install Platform LSF 10.1 first. Please read Basic Configuration of Platform LSF 10.1

Step 1: Unpack the Platform Application Centre package

# tar -zxvf pac10.1_standard_linux-x64.tar.Z
# cd pac10.1_standard_linux-x64

The package can be found in the installation directory, $LSF_INSTALL/lsfshpc10.1-x86_64/pac/pac10.1_standard_linux-x64.

Step 2: Install MySQL with yum

# yum install mysql mysql-server mysql-connector-java

Step 2a: Configure MySQL (see How to Install MySQL on CentOS 6)
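
A minimal sketch of the CentOS 6 service setup covered in that guide (assuming the stock mysqld init script; see the linked post for the full configuration):

# service mysqld start
# chkconfig mysqld on
# mysql_secure_installation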

Step 3: Edit the pacinstall.sh

export MYSQL_JDBC_DRIVER_JAR="/usr/share/java/mysql-connector-java-5.1.17.jar" (Line 84)

Step 4: Complete the installation by running the edited pacinstall.sh script

Step 4a: Enable perfmon in your LSF cluster
Optional. Enable perfmon in your LSF cluster to see the System Services Dashboard in IBM Spectrum LSF Application Center.

# badmin perfmon start
# badmin perfmon view

Step 4b: Set the IBM Spectrum LSF Application Center environment

# cp /opt/pac/profile.platform /etc/profile.d/pac1.sh
# source /etc/profile.d/pac1.sh

Step 4c: Start IBM Spectrum LSF Application Center services.

# perfadmin start all
# pmcadmin start

Step 4d: Check services have started.

# perfadmin list
# pmcadmin list

You can see the WEBGUI, jobdt, plc, purger, and PNC services started.

Step 5: Log in to IBM Spectrum LSF Application Center.

Browse to the web server URL and log in to the IBM Spectrum LSF Application Center with the IBM Spectrum LSF administrator name and password.

Step 5a: Import the cacert.pem certificate into the client browser

Step 6: Platform URL
When HTTPS is enabled, the web server URL is: https://host_name:8443/platform


References:

  1. IBM Spectrum LSF Application Center V10.1 documentation