Troubleshooting the PBS Control System and PBS Server

I was having this issue after submitting a job. It was caused by some configuration I had done to improve security, similar to what is described in Using the Host’s FirewallD as the Main Firewall to Secure Docker:

qsub: Budget Manager: License is unverified. AM is not handling requests

To resolve the issue, I took the following steps on the PBS-Control Server.

Step 1: Export the Path of the AM Database.

export PATH=/opt/am/postgres/bin:$PATH
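
A quick sanity check, assuming the AM bundle ships the standard PostgreSQL client tools in that directory, is to confirm they now resolve from the exported path:

# which psql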

Step 2: Check that the Docker container services are started on the system. You may want to start the containers manually to capture any errors; if a container is not able to start up, it is likely due to the firewall settings.

# systemctl status firewalld.service

Step 3: Restart the Altair Control service

# systemctl restart altaircontrol.service

Step 4: Use the docker command to get an overview of all running containers

# docker ps 
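
If a container is missing from the list or keeps restarting, you can include stopped containers and pull the logs to capture the error. The container name below is a placeholder for whichever Altair Control container is in question:

# docker ps -a
# docker logs <container_name>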

At the PBS Server, re-run the AM control register and check that it is working:

# /opt/am/libexec/am_control_register

To test, submit an interactive job with the correct project code; it should now work.
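
For example, something along these lines (the project code and resource request are placeholders; adjust them to your site):

% qsub -I -P <project_code> -l select=1:ncpus=1 -l walltime=00:30:00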

Allowing users to bypass the PBS Professional Scheduler and SSH directly into the Compute Node

For some special users, such as administrators, who need to SSH directly into the compute node instead of submitting through the scheduler (and without resorting to root), you may want to do the following:

At the Compute Node

# vim /var/spool/pbs/mom_priv/config

Find the $restrict_user_exceptions line and add the user to it, for example:

$clienthost 192.168.x.x
$clienthost 192.168.y.y
$restrict_user_maxsysid 999
$restrict_user True
$restrict_user_exceptions user1
$usecp *:/home/ /home/

Restart the PBS Service

# /etc/init.d/pbs restart
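
After the restart, the exception user should be able to SSH straight into the compute node, while other users without a job on that node will still have their sessions killed off by MoM. A quick check, with a placeholder hostname:

% ssh user1@compute-node-01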

Tuning Compute Performance – Nanyang Technological University Targets I/O Bottlenecks to Speed Up Research

A customer case study writeup on how the HPC Team at Nanyang Technological University used Altair Mistral to tune Compute Performance.

The High Performance Computing Centre (HPCC) at Nanyang Technological University Singapore supports the university’s large-scale and data-intensive computing needs, and resource requirements continue to grow. HPCC churned out nearly 19 million core CPU-hours and nearly 300,000 GPU-hours in 2021 to enable more than 160 NTU researchers. HPCC’s small, four-engineer team turned to Altair for cutting-edge tools to help support their growing user community and evaluate scaling up to a hybrid cloud environment. They needed job-level insights to understand runtime issues; metrics on I/O, CPU, and memory to identify bottlenecks; and the ability to detect problematic applications and rogue jobs with bad I/O patterns that could overload shared storage. The HPCC team deployed Altair Mistral™ to profile application I/O and determine the most efficient options to optimize HPC at NTU.


Application I/O Profiling on HPC Clusters with Altair Mistral and Altair PBS Professional

Altair and I have published a paper, “Application I/O Profiling on HPC Clusters with Altair Mistral and Altair PBS Professional”. For more information, take a look at:

The High Performance Computing Centre (HPCC) at Nanyang Technological University (NTU) Singapore employs the latest techniques to ensure good system utilization and a high-performance user experience. The university has a large HPC cluster with the Altair® PBS Professional® workload manager, and the HPCC team installed Altair Mistral™ to monitor application I/O and storage performance. In this paper, we describe how they used Mistral to analyze an HPC application. After getting some insights into the application, they profiled it against HPCC’s three storage tiers and gained detailed insights into application I/O patterns and storage performance.

Application I/O Profiling on HPC Clusters with Altair Mistral and Altair PBS Professional

PBS Professional MoM Access Configuration Parameters

Taken from the PBS Professional Admin Guide.

The configuration parameters can be found in /var/spool/pbs/mom_priv/config:

$restrict_user <value>
  • Controls whether users not submitting jobs have access to this machine. When True, only those users running jobs are allowed access.
  • Format: Boolean
  • Default: off
$restrict_user_exceptions <user_list>
  • List of users who are exempt from access restrictions applied by $restrict_user. Maximum number of names in list is 10.
  • Format: Comma-separated list of usernames; space allowed after comma
$restrict_user_maxsysid <value>
  • Allows system processes to run when $restrict_user is enabled. Any user with a numeric user ID less than or equal to value is exempt from restrictions applied by $restrict_user.
  • Format: Integer
  • Default: 999

Example


To restrict user access to those running jobs, add:

$restrict_user True

To specify the users who are allowed access whether or not they are running jobs, add:

$restrict_user_exceptions User1, User2

To allow system processes to run, specify the maximum numeric user ID by adding:

$restrict_user_maxsysid 999

Quick Fix to add a Queue for PBS Pro

One of the quickest ways to add a PBS Professional queue is to take an existing queue and modify its definition.

At your node holding the PBS Scheduler

# qmgr -c "print queue @default"
.....
.....
# Create and define queue q64
#
create queue q64
set queue q64 queue_type = Execution
set queue q64 Priority = 100
set queue q64 resources_max.ncpus = 256
set queue q64 resources_max.walltime = 500:00:00
set queue q64 resources_default.charge_rate = 0.04
set queue q64 default_chunk.Qlist = q64
set queue q64 max_run_res.ncpus = [u:PBS_GENERIC=256]
set queue q64 enabled = True
set queue q64 started = True
#
.....
.....

Copy out the queue definition and redirect it into a file:

# qmgr -c "print queue q64" > q64_new_queue

Edit the file, renaming the queue, and save it:

# Create and define queue q64_new_queue
#
create queue q64_new_queue
#
set queue q64_new_queue queue_type = Execution
set queue q64_new_queue Priority = 100
set queue q64_new_queue resources_max.ncpus = 256
set queue q64_new_queue resources_max.walltime = 500:00:00
set queue q64_new_queue resources_default.charge_rate = 0.04
set queue q64_new_queue default_chunk.Qlist = q64
set queue q64_new_queue max_run_res.ncpus = [u:PBS_GENERIC=256]
set queue q64_new_queue enabled = True
set queue q64_new_queue started = True
#

Pipe it back to qmgr

# qmgr < q64_new_queue
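
To confirm the queue was created, print the server's queue definitions again:

# qmgr -c "print queue @default"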

You should be able to see the new queue in the output:

...
...

# Create and define queue q64_new_queue
#
create queue q64_new_queue
#
set queue q64_new_queue queue_type = Execution
set queue q64_new_queue Priority = 100
set queue q64_new_queue resources_max.ncpus = 256
set queue q64_new_queue resources_max.walltime = 500:00:00
set queue q64_new_queue resources_default.charge_rate = 0.04
set queue q64_new_queue default_chunk.Qlist = q64
set queue q64_new_queue max_run_res.ncpus = [u:PBS_GENERIC=256]
set queue q64_new_queue enabled = True
set queue q64_new_queue started = True
#
...
...

Running Job Arrays on PBS Professional

If you intend to run the same program against different input files, it is best to use a job array instead of creating a separate submission script for each input file, which is tedious. It is very easy.

Amending the Submission Scripts (Part 1)

To create an array job, use the -J option in the PBS script. For 10 sub-jobs, add the following:

#PBS -J 1-10

Amending the Submission Scripts (Part 2)

If your input files are named with a running number, for example data1.gjf, data2.gjf, data3.gjf, data4.gjf, data5.gjf … data10.gjf, you can select each sub-job's input file with the array index:

inputfile=data$PBS_ARRAY_INDEX.gjf
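
Putting it together, a minimal array submission script might look like the sketch below. The queue name, resource request, and the g09 command are just placeholders; adjust them to whatever your site and application actually use.

#!/bin/bash
#PBS -N Gaussian-array
#PBS -J 1-10
#PBS -q q32
#PBS -l select=1:ncpus=4
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR

# each sub-job picks up its own input file based on its array index
inputfile=data$PBS_ARRAY_INDEX.gjf
g09 < $inputfile > data$PBS_ARRAY_INDEX.log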

Submitting the Jobs

To submit the jobs, just run:

% qsub yoursubmissionscript.pbs

Checking Jobs

After you run qstat, you will notice that your job has a “B” state:

% qstat -u user1
544198[].node1 Gaussian-09e user1 0 B q32

To list the individual sub-jobs, use the “-t” (or “-Jt”) option:

% qstat -t 544198[]
Job id Name User Time Use S Queue
---------------- ---------------- ---------------- -------- - -----
544198[].node1 Gaussian-09e user1 0 B q32
544198[54].node1 Gaussian-09e user1 00:40:21 R q32
544198[55].node1 Gaussian-09e user1 00:15:25 R q32

To delete a particular sub-job:

% qdel "544198[5]"
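
To delete the entire array in one go, delete the parent job instead:

% qdel "544198[]"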

Basic Tracing of Job Issues in PBS Professional

Step 1: Proceed to the Head Node (Scheduler)

Once you have the Job ID you wish to investigate, go to the Head Node and run tracejob. The “-n” option specifies how many past days of logs to search:

% tracejob -n 10 jobID

From the tracejob output, you will be able to see which node the job landed on. Next, you can go to the node in question and look for information in the mom_logs:

% vim /var/spool/pbs/mom_logs/thedateyouarelookingat

For example,

% vim /var/spool/pbs/mom_logs/20201211

Using Vim, search for the Job ID

? yourjobID
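
Alternatively, you can grep the day's MoM log for the Job ID directly instead of opening it in Vim:

% grep yourjobID /var/spool/pbs/mom_logs/20201211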

You should be able to get a good hint of what has happened. In my case, the NVIDIA drivers were having issues.