Install and Enable EPEL Repository for CentOS 7.x

EPEL is an acronym for Extra Packages for Enterprise Linux. The EPEL repository is used by the following Linux distributions:

  • Red Hat Enterprise Linux (RHEL)
  • CentOS
  • Oracle Linux

In the terminal:

Install EPEL Repository

# yum -y install epel-release

Verify that the EPEL Repository is Enabled

# yum repolist

Install Packages from EPEL Repository

# yum install -y htop

Search for a Package in the EPEL Repository (e.g. htop)

# yum --disablerepo="*" --enablerepo="epel" list available | grep 'htop'

CentOS Replacement Rocky Linux Is Now in GA and Under Independent Control

The Rocky Enterprise Software Foundation (RESF) is announcing the general availability of Rocky Linux, release 8.4, designed as a drop-in replacement for the soon-to-be discontinued CentOS. The GA release is launching six-and-a-half months after Red Hat deprecated its support for the widely popular, free CentOS server operating system.

HPC Wire, 21 June 2021

For more information, see the article “CentOS Replacement Rocky Linux Is Now in GA and Under Independent Control”.

Also take a look at the Rocky Linux site.

Finding physical cpus, cores and logical cpus

Number of Active Physical Processors

% grep physical.id /proc/cpuinfo | sort -u | wc -l
8

Number of cores per CPU

% grep cpu.cores /proc/cpuinfo | sort -u
cpu cores       : 26

Therefore the total number of cores is
8×26 = 208 cores

Number of Logical Processors

This is the number of logical CPUs seen by Linux. Since the server has Hyper-Threading enabled, each core presents two logical processors:

% grep processor /proc/cpuinfo | wc -l
416
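
The three commands above can be combined programmatically. Below is a minimal Python sketch that applies the same counting logic to /proc/cpuinfo-style text; the sample data is illustrative (on a real host you would read the contents of /proc/cpuinfo instead):

```python
# Count physical CPUs, cores per CPU, and logical CPUs from /proc/cpuinfo text.
# The sample below is a hypothetical 2-socket excerpt, not output from a real host.

sample = """\
processor       : 0
physical id     : 0
cpu cores       : 26
processor       : 1
physical id     : 0
cpu cores       : 26
processor       : 2
physical id     : 1
cpu cores       : 26
"""

def cpu_summary(cpuinfo_text):
    physical_ids = set()
    cores_per_cpu = 0
    logical = 0
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "processor":
            logical += 1                    # one "processor" entry per logical CPU
        elif key == "physical id":
            physical_ids.add(value.strip()) # unique ids = physical sockets
        elif key == "cpu cores":
            cores_per_cpu = int(value)      # cores in each physical package
    return len(physical_ids), cores_per_cpu, logical

sockets, cores, logical = cpu_summary(sample)
print(sockets, cores, logical)              # 2 sockets, 26 cores each, 3 logical entries
print("total cores:", sockets * cores)
```

With the real /proc/cpuinfo from the server above, the same logic yields 8 sockets, 26 cores per socket, and 416 logical processors.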

References:

  1. How to find the number of physical cpus, cpu cores, and logical cpus

Understanding Load Average in Linux

This is taken from the Red Hat article “What is the relation between I/O wait and load average?”, from which I have learned quite a bit.

Linux, unlike traditional UNIX operating systems, computes its load average as the average number of runnable or running processes (R state) plus the number of processes in uninterruptible sleep (D state) over the specified interval. On UNIX systems, only the runnable or running processes are taken into account for the load average calculation.

On Linux the load average is a measurement of the amount of “work” being done by the machine (without being specific as to what that work is). This “work” could reflect a CPU intensive application (compiling a program or encrypting a file), or something I/O intensive (copying a file from disk to disk, or doing a database full table scan), or a combination of the two.
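
Under the hood, each of the three load figures (1, 5, and 15 minutes) is an exponentially damped moving average of the R + D task count, sampled roughly every 5 seconds. A simplified Python sketch of the 1-minute figure (the kernel uses fixed-point arithmetic, but the idea is the same):

```python
import math

# Simplified model of the 1-minute load average: every ~5 seconds the kernel
# folds the current count of R + D tasks into an exponentially damped average.
SAMPLE_INTERVAL = 5.0    # seconds between samples
PERIOD = 60.0            # averaging window for the 1-minute figure
DECAY = math.exp(-SAMPLE_INTERVAL / PERIOD)

def update_load(load, active_tasks):
    """One sampling tick: blend the current R+D task count into the average."""
    return load * DECAY + active_tasks * (1.0 - DECAY)

# With a constant 4 runnable/blocked tasks, the average converges toward 4.
load = 0.0
for _ in range(120):     # 10 minutes of 5-second ticks
    load = update_load(load, 4)
print(round(load, 2))    # converges to ~4.0
```

This is why a machine with a steady stream of D-state processes reports a high load average even when the CPUs are mostly idle, as the top output further below illustrates.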

The article shows how to determine whether a high load average is the result of processes in the running state or in the uninterruptible state.

I like this script from the knowledgebase. It prints the number of running, blocked, and running+blocked processes alongside the load average.

[user@node1 ~]$ while true; do echo; uptime; ps -efl | awk 'BEGIN {running = 0; blocked = 0} $2 ~ /R/ {running++}; $2 ~ /D/ {blocked++} END {print "Number of running/blocked/running+blocked processes: "running"/"blocked"/"running+blocked}'; sleep 5; done

 23:45:52 up 52 days,  7:06, 22 users,  load average: 1.40, 1.26, 1.02
Number of running/blocked/running+blocked processes: 3/1/4

 23:45:57 up 52 days,  7:06, 22 users,  load average: 1.45, 1.27, 1.02
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:02 up 52 days,  7:06, 22 users,  load average: 1.41, 1.27, 1.02
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:07 up 52 days,  7:07, 22 users,  load average: 1.46, 1.28, 1.03
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:12 up 52 days,  7:07, 22 users,  load average: 1.42, 1.27, 1.03
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:17 up 52 days,  7:07, 22 users,  load average: 1.55, 1.30, 1.04
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:22 up 52 days,  7:07, 22 users,  load average: 1.51, 1.30, 1.04
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:27 up 52 days,  7:07, 22 users,  load average: 1.55, 1.31, 1.05
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:32 up 52 days,  7:07, 22 users,  load average: 1.62, 1.33, 1.06
Number of running/blocked/running+blocked processes: 2/1/3

 23:46:38 up 52 days,  7:07, 22 users,  load average: 1.81, 1.38, 1.07
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:43 up 52 days,  7:07, 22 users,  load average: 1.66, 1.35, 1.07
Number of running/blocked/running+blocked processes: 1/0/1

 23:46:48 up 52 days,  7:07, 22 users,  load average: 1.53, 1.33, 1.06
Number of running/blocked/running+blocked processes: 1/0/1

Another useful approach is to look at typical top output when the load average is high (press i to filter out idle/sleeping tasks). Here the high load average is caused by many sendmail tasks in D status; they may be waiting either for I/O or for the network.

top - 13:23:21 up 329 days,  8:35,  0 users,  load average: 50.13, 13.22, 6.27
Tasks: 437 total,   1 running, 435 sleeping,   0 stopped,   1 zombie
Cpu(s):  0.1%us,  1.5%sy,  0.0%ni, 93.6%id,  4.5%wa,  0.1%hi,  0.2%si,  0.0%st
Mem:  34970576k total, 24700568k used, 10270008k free,  1166628k buffers
Swap:  2096440k total,        0k used,  2096440k free, 11233868k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
11975 root      15   0 13036 1356  820 R  0.7  0.0   0:00.66 top                
15915 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15918 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15920 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15921 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15922 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15923 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15924 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15926 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15928 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15929 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15930 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15931 root      18   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           

References:

  1. What is the relation between I/O wait and load average?

Error “Too many files open” on CentOS 7

If you encounter error messages during login such as “Too many open files” and the session is terminated automatically, the open-file limit for a user or for the system has been exceeded, and you may wish to change it.

@ System Level

To see the settings for maximum open files,

# cat /proc/sys/fs/file-max
55494980

This value is the maximum number of files that all processes running on the system can open in total. By default this number scales with the amount of RAM in the system; as a rough guideline, it is about 100,000 files per GB of RAM.
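
Working that guideline backwards gives a rough idea of how much RAM the machine above has. This is only an order-of-magnitude estimate, not an exact relationship:

```python
# Rough inverse of the "~100,000 files per GB of RAM" guideline,
# using the file-max value shown above.
file_max = 55494980
approx_ram_gb = file_max / 100_000
print(round(approx_ram_gb))   # roughly 555 GB of RAM
```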


To override the system-wide maximum number of open files, edit /etc/sysctl.conf:

# vim /etc/sysctl.conf
 fs.file-max = 80000000

Apply the change to the running system:

# sysctl -p

@ User Level

To see the setting for maximum open files for a user

# su - user1
$ ulimit -n
1024

To change the setting, edit the /etc/security/limits.conf

$ vim /etc/security/limits.conf
user1 - nofile 2048

To change for all users

* - nofile 2048

This sets the maximum number of open files for ALL users to 2048. These settings take effect on new login sessions; existing sessions keep the old limit.
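
From inside a process, the effective limit can also be inspected and lowered with Python's standard resource module. A small sketch; note that a process may lower its own soft limit, but raising it beyond the hard limit requires privileges:

```python
import resource

# RLIMIT_NOFILE holds the (soft, hard) open-file limits for this process,
# i.e. the same values "ulimit -n" reports for the shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# Lower the soft limit to 512 descriptors (capped by the hard limit).
new_soft = min(512, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("new soft:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```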

References:

  1. How to correct the error “Too many files open” on Red Hat Enterprise Linux

Tools to Show your System Configuration

Tool 1: Display Information about CPU Architecture

[user1@node1 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz
Stepping: 4
CPU MHz: 3200.000
BogoMIPS: 6400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
.....
.....
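
The lscpu totals are internally consistent: CPU(s) should equal Socket(s) × Core(s) per socket × Thread(s) per core. A quick check with the numbers above:

```python
# Figures taken from the lscpu output above.
sockets = 2
cores_per_socket = 8
threads_per_core = 2

logical_cpus = sockets * cores_per_socket * threads_per_core
print(logical_cpus)   # matches the "CPU(s): 32" line
```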

Tool 2: List all PCI devices

[user1@node1 ~]# lspci -t -vv
-+-[0000:d7]-+-00.0-[d8]--
| +-01.0-[d9]--
| +-02.0-[da]--
| +-03.0-[db]--
| +-05.0 Intel Corporation Device 2034
| +-05.2 Intel Corporation Sky Lake-E RAS Configuration Registers
| +-05.4 Intel Corporation Device 2036
| +-0e.0 Intel Corporation Device 2058
| +-0e.1 Intel Corporation Device 2059
| +-0f.0 Intel Corporation Device 2058
| +-0f.1 Intel Corporation Device 2059
| +-10.0 Intel Corporation Device 2058
| +-10.1 Intel Corporation Device 2059
| +-12.0 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.1 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.2 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.4 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.5 Intel Corporation Sky Lake-E M3KTI Registers
| +-15.0 Intel Corporation Sky Lake-E M2PCI Registers
| +-16.0 Intel Corporation Sky Lake-E M2PCI Registers
| +-16.4 Intel Corporation Sky Lake-E M2PCI Registers
| \-17.0 Intel Corporation Sky Lake-E M2PCI Registers
.....
.....

Tool 3: List block devices

[user@node1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  1.1T  0 disk
├─sda1            8:1    0  200M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0  1.1T  0 part
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    4G  0 lvm  [SWAP]
  └─centos-home 253:2    0    1T  0 lvm

Tool 4: See the flags the kernel was booted with

[user@node1 ~]$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
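
The boot parameters are space-separated key=value pairs mixed with bare flags. A small Python sketch that parses the line above into flags and options (on a real host, read /proc/cmdline instead of using the hard-coded string):

```python
# Parse a kernel command line into bare flags and key=value options.
cmdline = ("BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 "
           "root=/dev/mapper/centos-root ro crashkernel=auto "
           "rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet "
           "LANG=en_US.UTF-8")

flags = []
options = {}
for token in cmdline.split():
    if "=" in token:
        key, _, value = token.partition("=")
        options.setdefault(key, []).append(value)   # rd.lvm.lv appears twice
    else:
        flags.append(token)                         # bare flags like "ro", "quiet"

print(flags)                  # ['ro', 'rhgb', 'quiet']
print(options["rd.lvm.lv"])   # ['centos/root', 'centos/swap']
```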

Tool 5: Display available network interfaces

[root@hpc-gekko1 ~]# ifconfig -a
eno1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether XX:XX:XX:XX:XX:XX txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
.....
.....
.....

Tool 6: Using dmidecode to find hardware information
See Using dmidecode to find hardware information

How to configure NFS on CentOS 7

Step 1: Do a Yum Install

# yum install nfs-utils rpcbind

Step 2: Enable the Service at Boot Time

# systemctl enable nfs-server
# systemctl enable rpcbind
# systemctl enable nfs-lock     (it does not need to be enabled since rpc-statd.service  is static.)
# systemctl enable nfs-idmap    (it does not need to be enabled since nfs-idmapd.service is static.)

Step 3: Start the Services

# systemctl start rpcbind
# systemctl start nfs-server
# systemctl start nfs-lock
# systemctl start nfs-idmap

Step 4: Confirm the status of NFS

# systemctl status nfs

Step 5: Create a mount point

# mkdir /shared-data

Step 6: Define the Share in /etc/exports

# vim /etc/exports
/shared-data 192.168.0.0/16(rw,no_root_squash)

Step 7: Export the Share

# exportfs -rv

Step 8: Restart the NFS Services

# systemctl restart nfs-server

Step 9: Configure the Firewall

# firewall-cmd --add-service=nfs --zone=internal --permanent
# firewall-cmd --add-service=mountd --zone=internal --permanent
# firewall-cmd --add-service=rpc-bind --zone=internal --permanent

References:

  1. How to configure NFS in RHEL 7
  2. What firewalld services should be active on an NFS server in RHEL 7?

Configuring External libraries for R-Studio for CentOS 7

RStudio Server can be configured by adding entries to two configuration files. They may not exist by default, and you may need to create them:

/etc/rstudio/rserver.conf
/etc/rstudio/rsession.conf

If you need to add additional libraries to the default LD_LIBRARY_PATH for R sessions, add the rsession-ld-library-path parameter to /etc/rstudio/rserver.conf.

Step 1: Enter the External Library Settings
For example, to add the GCC 6.5 libraries:

# Server Configuration File
rsession-ld-library-path=/usr/local/gcc-6.5.0/lib64:/usr/local/gcc-6.5.0/lib

Step 2a: Stop the R Server Services

# /usr/sbin/rstudio-server stop

Step 2b: Verify the Installation

# rstudio-server verify-installation

Step 2c: Start the R Server Services

# /usr/sbin/rstudio-server start
# /usr/sbin/rstudio-server status

(Make sure there are no errors.)

References:

  1. RStudio Server: Configuring the Server