Configure PBS not to run jobs that would cross scheduled downtime (dedicated time)

Step 1: Go to /pbs/pbs_home/sched_priv and edit the file dedicated_time

# vim /pbs/pbs_home/sched_priv/dedicated_time
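The path above is site-specific. If the scheduler's home directory is not known, it can be confirmed from the PBS_HOME entry in /etc/pbs.conf:

# grep PBS_HOME /etc/pbs.conf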

Enter the start and end date/time of the downtime window in the following format:

# FORMAT: FROM              TO
#         ----              --
#         MM/DD/YYYY HH:MM  MM/DD/YYYY HH:MM
For example:

01/08/2020 08:00 01/08/2020 20:00
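If there is more than one maintenance window, each window can be listed on its own line in dedicated_time. A sketch with two illustrative windows:

01/08/2020 08:00 01/08/2020 20:00
02/05/2020 06:00 02/05/2020 18:00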

Step 2: Reload the scheduler configuration by sending a SIGHUP to the pbs_sched process

# ps -eaf | grep -i pbs_sched
# kill -HUP 438652
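Here 438652 is the PID of pbs_sched taken from the ps output above. If pgrep is available on the server, the two commands can be combined into one:

# kill -HUP $(pgrep -x pbs_sched)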

Step 3: Submit a job whose requested walltime crosses the scheduled downtime; you should see

$ qstat -asn1
55445.hpc-mn1 user1 q32 MPI2 -- 3 96 -- 120:0 Q -- -- Not Running: Job would cross dedicated time boundary
55454.hpc-mn1 user2 q32 MPI -- 1 4 -- 120:0 Q -- -- Not Running: Job would cross dedicated time boundary
55455.hpc-mn1 user1 q32 MPI -- 1 4 -- 120:0 Q -- -- Not Running: Job would cross dedicated time boundary
.....
.....
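
One way to reproduce this is to submit a throwaway job whose requested walltime overlaps the window defined in Step 1; the resource request below is only an illustrative sketch:

$ qsub -l select=1:ncpus=1 -l walltime=48:00:00 -- /bin/sleep 60
$ qstat -asn1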

Displaying node-level resource summary

P1: To view a node-level resource summary, similar to bhosts in Platform LSF

# pbsnodes -aSn
n003 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14654
n004 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14661
n005 free 9 9 0 346gb/346gb 21/32 0/0 0/0 14570,14571,14678,14443,14608,14609,14444,14678,14679
n006 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n008 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n009 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n010 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14665
n012 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n013 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n014 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n015 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n007 free 0 0 0 377gb/377gb 32/32 0/0 0/0 --
n016 job-busy 1 1 0 77gb/377gb 0/32 0/0 0/0 14681
n017 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14676
n018 job-busy 1 1 0 377gb/377gb 0/32 0/0 0/0 14677
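
To show only nodes in a particular state, the summary can be filtered with awk. The field positions assumed below (vnode in field 1, state in field 2) match the sample output above:

# pbsnodes -aSn | awk '$2 == "free"'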

P2: To view the job summary with node assignments and scheduler comments via qstat

# qstat -ans | less
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
40043.hpc-mn1   chunfei0 iworkq   Ansys      144867   1   1  256mb 720:0 R 669:1
   r001/11
   Job run at Mon Oct 21 at 15:30 on (r001:ncpus=1:mem=262144kb:ngpus=1)
40092.hpc-mn1   e190013  iworkq   Ansys      155351   1   1  256mb 720:0 R 667:0
   r001/13
   Job run at Mon Oct 21 at 17:41 on (r001:ncpus=1:mem=262144kb:ngpus=1)
42557.mn1       i180004  q32      LAMMPS         --   1  48     -- 72:00 Q    --
    --
   Not Running: Insufficient amount of resource: ncpus (R: 48 A: 14 T: 2272)
42941.mn1       hpcsuppo iworkq   Ansys      255754   1   1  256mb 720:0 R 290:2
   hpc-r001/4
   Job run at Wed Nov 06 at 10:18 on (r001:ncpus=1:mem=262144kb:ngpus=1)
....
....
....
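
To narrow the listing to queued jobs and their scheduler comments, qstat can be fed the job IDs returned by qselect. This is a sketch and assumes at least one job is currently in state Q:

# qstat -ans $(qselect -s Q) | less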

Adding New Nodes under PBS Professional

Step 1: Copy /etc/pbs.conf from any existing node to /etc on the new node

# scp -v /etc/pbs.conf root@remotenode:/etc

Step 2: Install the PBS Professional execution RPM on the new node

# rpm -Uvh /usr/local/software/admin/altair/PBSPro_14.2.5/pbspro-execution-14.2.5.20180221140231-0.el7.x86_64.rpm
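
To confirm the package installed cleanly, query the RPM database; the package name prefix below matches the RPM used above:

# rpm -qa | grep -i pbspro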

Step 3: Copy /var/spool/pbs/mom_priv/config from any existing node to /var/spool/pbs/mom_priv on the new node

# scp -v /var/spool/pbs/mom_priv/config root@remotenode:/var/spool/pbs/mom_priv/
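
For reference, a minimal mom_priv/config usually just points the MoM at the PBS server via the $clienthost directive; the hostname below is illustrative and should be replaced with the actual server name:

$clienthost hpc-mn1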

Step 4: Restart the PBS service on the new node

# service pbs restart
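
After the restart, the MoM daemon should be running on the new node. This can be checked the same way pbs_sched was found earlier:

# ps -eaf | grep -i pbs_mom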

Step 5: Create the node on the PBS server

# qmgr -c "create node node-name"
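
Once the node is created and its MoM has reported in, it should appear in the node list. The node name is a placeholder, as in the qmgr command above:

# pbsnodes node-name
# pbsnodes -aSn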