Webinar – A Dynamic Self-Aware Approach to Cybersecurity by Erol Gelenbe

This presentation will argue that cyberattacks impair not just security but also Quality of Service, and that they increase energy consumption in systems and networks. Thus they not only cause damage to the users of a system, but also impair its reputation and trust and increase its operating costs. We will also take the view that these are dynamic phenomena which occur unexpectedly, so future systems will have to constantly observe their own state in order to react very rapidly to dynamic attacks. We will suggest a Self-Aware approach to dynamically responding to cyberattacks, based on the Cognitive Packet Network dynamic routing algorithm, which uses Recurrent Random Neural Networks and Reinforcement Learning. Illustrations will be provided from two FP7 and H2020 projects that I proposed and which were funded by the European Union.

 

RMC Commands to Troubleshoot the HPE SuperDome Flex

1. Display current complex information

RMC cli> show complex

2. Display the currently attached chassis

RMC cli> show chassis list

3. Display current health status

RMC cli> show health

4. Display the current partitions on the server

RMC cli> show npar

5. Display components that have been indicted

RMC cli> show indict

6. View events in the Core Analysis Engine

RMC cli> show cae

7. Acquit indicted hardware

RMC cli> acquit all

8. Power on a partition

RMC cli> power on

9. Power off a partition

RMC cli> power off
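These commands are run at the RMC cli> prompt after logging in to the Rack Management Controller. If your RMC accepts non-interactive commands over SSH (an assumption worth verifying on your firmware level), a quick diagnostic sweep can be scripted from an admin host; the hostname and username below are placeholders.

#!/bin/bash
# Hypothetical helper: gather basic SuperDome Flex diagnostics from the RMC over SSH.
RMC_HOST=rmc-hostname        # placeholder: your RMC hostname or IP
RMC_USER=administrator       # placeholder: your RMC login
for cmd in "show complex" "show chassis list" "show health" "show npar" "show indict"; do
    echo "=== $cmd ==="
    ssh "$RMC_USER@$RMC_HOST" "$cmd"
done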

 

Compiling OpenFOAM-7 and ThirdParty-7 with Intel MPI on CentOS 7

Step 1: Create a directory and git clone OpenFOAM-7 and ThirdParty-7

# mkdir -p /usr/local/OpenFOAM
# cd /usr/local/OpenFOAM
# git clone https://github.com/OpenFOAM/OpenFOAM-7.git
# git clone https://github.com/OpenFOAM/ThirdParty-7.git

Step 2a: Load the Intel compilers. Here I used Intel Parallel Studio XE 2018 Cluster Edition (Update 3)

# source /usr/local/intel/2018u3/bin/compilervars.sh intel64
# source /usr/local/intel/2018u3/mkl/bin/mklvars.sh intel64
# source /usr/local/intel/2018u3/impi/2018.3.222/bin64/mpivars.sh intel64
# source /usr/local/intel/2018u3/parallel_studio_xe_2018/bin/psxevars.sh intel64
# export MPI_ROOT=/usr/local/intel/2018u3/impi/2018.3.222/intel64

Step 2b: Create soft links include64 and lib64 for Intel MPI

# cd /usr/local/intel/2018u3/impi/2018.3.222/intel64
# ln -s include include64
# ln -s lib lib64

Step 3: Edit the OpenFOAM bashrc

# vim /usr/local/OpenFOAM/OpenFOAM-7/etc/bashrc
......
export FOAM_INST_DIR=/usr/local/$WM_PROJECT
.....
#- Compiler:
# WM_COMPILER = Gcc | Gcc48 ... Gcc62 | Clang | Icc
export WM_COMPILER=Icc
.....
#- MPI implementation:
# WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
# | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI
export WM_MPLIB=INTELMPI
.....
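After saving the changes, source the edited bashrc so the Icc and Intel MPI settings take effect, and check that the standard OpenFOAM environment variables point where you expect (a minimal sanity check, assuming the install location above):

# source /usr/local/OpenFOAM/OpenFOAM-7/etc/bashrc
# echo $WM_PROJECT_DIR $WM_COMPILER $WM_MPLIB
/usr/local/OpenFOAM/OpenFOAM-7 Icc INTELMPI
# which mpicc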

Step 4: Edit the ThirdParty-7 scotch_6.0.9 package

$ cd /usr/local/OpenFOAM/ThirdParty-7/scotch_6.0.9/src
$ vim /usr/local/OpenFOAM/ThirdParty-7/scotch_6.0.9/src/Makefile.inc
AR = icc
...
CCS = icc
CCP = mpicc
...
CFLAGS = $(WM_CFLAGS) -O3 -DCOMMON_FILE_COMPRESS_GZ -DCOMMON_RANDOM_FIXED_SEED -DSCOTCH_RENAME -Drestrict=__restrict -I$(MPI_ROOT)/include64 -L$(MPI_ROOT)/lib64

Step 5: Go back to the OpenFOAM source directory and compile

# cd /usr/local/OpenFOAM/OpenFOAM-7
# ./Allwmake -j 16 | tee Allwmake.log
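The build takes a while. Once Allwmake finishes, it is worth scanning the log for failures before testing (a simple check, not part of the official build procedure):

# grep -iE "error|undefined reference" Allwmake.log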

Step 6: Test the OpenFOAM installation

# /usr/local/OpenFOAM/OpenFOAM-7/bin/foamInstallationTest
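As a further sanity check, you can run a standard tutorial case such as the icoFoam cavity with the newly built binaries. This assumes the bashrc from Step 3 has been sourced; $FOAM_RUN and $FOAM_TUTORIALS are standard OpenFOAM environment variables.

# mkdir -p $FOAM_RUN
# cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity/cavity $FOAM_RUN
# cd $FOAM_RUN/cavity
# blockMesh
# icoFoam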

 

Altair Webinar – End-to-End HPC from Home with Altair Access – Run, Visualize and Manage Files

Altair Access provides a simple, powerful, and consistent interface for submitting and monitoring jobs on remote clusters, clouds, and other resources, allowing engineers and researchers to focus on core activities and spend less time learning how to run applications and moving data around.

Live Webinar
Thursday, April 23
11:00 AM – 12:00 PM SGT | 01:00 PM – 02:00 PM AEST
Click here to Register

Click Here for the Agenda

Who should attend:
HPC engineers, scientists, and administrators who would like to access HPC from anywhere with ease, and anyone interested in learning about High Performance Computing.

Allocating More GPU Chunks for a GPU Node in PBS Professional

Check the visualisation node configuration:

# qmgr -c "print node VizSvr1"

1. In the node configuration in PBS Professional, the GPU chunk ("ngpus") is 10.

#
# Create nodes and set their properties.
#
#
# Create and define node VizSvr1
#
create node VizSvr1
set node VizSvr1 state = free
set node VizSvr1 resources_available.allows_container = False
set node VizSvr1 resources_available.arch = linux
set node VizSvr1 resources_available.host = VizSvr1
set node VizSvr1 resources_available.mem = 791887872kb
set node VizSvr1 resources_available.ncpus = 24
set node VizSvr1 resources_available.ngpus = 10
set node VizSvr1 resources_available.vnode = VizSvr1
set node VizSvr1 queue = iworkq
set node VizSvr1 resv_enable = True

2. At the queue level, notice that the maximum GPU chunk ("resources_max.ngpus") is 10 and the default CPU chunk ("default_chunk.ncpus") is 2.

[root@scheduler1 ~]# qmgr
Max open servers: 49
Qmgr: p q iworkq
#
# Create queues and set their attributes.
#
#
# Create and define queue iworkq
#
create queue iworkq
set queue iworkq queue_type = Execution
set queue iworkq Priority = 150
set queue iworkq resources_max.ngpus = 10
set queue iworkq resources_min.ngpus = 1
set queue iworkq resources_default.arch = linux
set queue iworkq resources_default.place = free
set queue iworkq default_chunk.mem = 512mb
set queue iworkq default_chunk.ncpus = 2
set queue iworkq enabled = True
set queue iworkq started = True

2a. Configure at the queue level: increase the maximum GPU chunks so that more users can run sessions, and lower the default CPU chunk so that CPU cores are spread out among the concurrent sessions.

Qmgr: set queue iworkq resources_max.ngpus = 20
Qmgr: set queue iworkq default_chunk.ncpus = 1
Qmgr: p q iworkq

2b. Configure at the node level: increase the GPU chunks at the node level to the number you used at the queue level. Make sure the two numbers are the same.

Qmgr: p n VizSvr1
#
# Create nodes and set their properties.
#
#
# Create and define node VizSvr1
#
create node VizSvr1
set node VizSvr1 state = free
set node VizSvr1 resources_available.allows_container = False
set node VizSvr1 resources_available.arch = linux
set node VizSvr1 resources_available.host = VizSvr1
set node VizSvr1 resources_available.mem = 791887872kb
set node VizSvr1 resources_available.ncpus = 24
set node VizSvr1 resources_available.ngpus = 10
set node VizSvr1 resources_available.vnode = VizSvr1
set node VizSvr1 queue = iworkq
set node VizSvr1 resv_enable = True
Qmgr: set node VizSvr1 resources_available.ngpus = 20
Qmgr: q
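A quick check that the new value has taken effect (output format may vary slightly between PBS Professional versions):

# pbsnodes VizSvr1 | grep ngpus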

You can verify by logging in with more sessions and testing it:

[root@VizSvr1 ~]# qstat -ans | grep iworkq
94544.VizSvr1 user1 iworkq xterm 268906 1 1 256mb 720:0 R 409:5
116984.VizSvr1 user1 iworkq Abaqus 101260 1 1 256mb 720:0 R 76:38
118478.VizSvr1 user2 iworkq Ansys 236421 1 1 256mb 720:0 R 51:37
118487.VizSvr1 user3 iworkq Ansys 255657 1 1 256mb 720:0 R 49:51
119676.VizSvr1 user4 iworkq Ansys 308767 1 1 256mb 720:0 R 41:40
119862.VizSvr1 user5 iworkq Matlab 429798 1 1 256mb 720:0 R 23:54
120949.VizSvr1 user6 iworkq Ansys 450449 1 1 256mb 720:0 R 21:12
121229.VizSvr1 user7 iworkq xterm 85917 1 1 256mb 720:0 R 03:54
121646.VizSvr1 user8 iworkq xterm 101901 1 1 256mb 720:0 R 01:57
121664.VizSvr1 user9 iworkq xterm 111567 1 1 256mb 720:0 R 00:01
121666.VizSvr1 user9 iworkq xterm 112374 1 1 256mb 720:0 R 00:00
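To confirm that the enlarged GPU limit is usable, a test job can request a GPU chunk explicitly. The select statement below is illustrative and the job script name is a placeholder.

[user1@VizSvr1 ~]$ qsub -q iworkq -l select=1:ncpus=1:ngpus=1 test_gpu_job.sh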