Setting up NTP Client in Rocky Linux 8.5

Prerequisites Step 1: Ensure you are in the correct time zone

# timedatectl
               Local time: Wed 2022-04-20 10:04:44 +08
           Universal time: Wed 2022-04-20 02:04:44 UTC
                 RTC time: Wed 2022-04-20 02:04:44
                Time zone: Asia/Singapore (+08, +0800)
System clock synchronized: no
              NTP service: active
          RTC in local TZ: no

Prerequisites Step 2: List Time Zones

# timedatectl list-timezones
.....
Asia/Singapore
.....

Prerequisites Step 3: Set Time Zone

# timedatectl set-timezone Asia/Singapore
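
To confirm that the change took effect, run timedatectl again and check the Time zone line:

# timedatectl | grep "Time zone"
                Time zone: Asia/Singapore (+08, +0800)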

In Rocky Linux 8.5, the ntp package is no longer supported; timekeeping is now handled by chronyd (a daemon that runs in user space), which is provided by the chrony package.

chrony works both as an NTP server and as an NTP client and is used to synchronize the system clock with NTP servers.

To install the chrony suite, use the DNF Package Manager.

# dnf install chrony

Start and Enable the Service

# systemctl start chronyd
# systemctl status chronyd
# systemctl enable chronyd
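
On a systemd-based distribution such as Rocky Linux, the start and enable steps can also be combined into a single command, equivalent to the separate calls above:

# systemctl enable --now chronyd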

Check that the clock is synchronized

[root@h00 etc]# timedatectl
               Local time: Wed 2022-04-20 10:19:56 +08
           Universal time: Wed 2022-04-20 02:19:56 UTC
                 RTC time: Wed 2022-04-20 02:19:56
                Time zone: Asia/Singapore (+08, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

Setting up NTP Client Using Chrony in Rocky Linux 8.5

Edit /etc/chrony.conf so that the pool directive points at a nearby NTP pool (here the Singapore zone of pool.ntp.org), then restart chronyd:

# vim /etc/chrony.conf
.....
pool sg.pool.ntp.org iburst
.....
# systemctl restart chronyd

Show the current time sources that chronyd is accessing

# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 178.128.223.142               0   6     0     -     +0ns[   +0ns] +/-    0ns
.....
.....
.....
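
The ^? state with a Reach of 0 means chronyd has not yet received a valid reply from that source; once synchronization completes, the selected source is marked ^*. For a more detailed view of the reference source, offset, and drift, chronyc also provides:

# chronyc tracking
# chronyc sourcestats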

Comparison between the /etc/os-release of RHEL-8.5 and Rocky Linux 8.5

Rocky Linux is a production-ready downstream rebuild of Red Hat Enterprise Linux, started by Gregory Kurtzer, the original founder of CentOS. The OS is almost identical to RHEL and is under intensive development by the community.

For RHEL-8.5,

NAME="Red Hat Enterprise Linux"
VERSION="8.5 (Green Obsidian)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.5 (Green Obsidian)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.5:GA"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

For Rocky-Linux-8.5,

NAME="Rocky Linux"
VERSION="8.5 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.5 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8.5:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
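
One practical point about these files: /etc/os-release is a plain shell-sourceable key=value file, so scripts can branch on ID and ID_LIKE instead of hard-coding distribution names. A minimal sketch (the rhel pattern below matches Rocky Linux through its ID_LIKE line):

#!/bin/bash
# Read ID, ID_LIKE, PRETTY_NAME, etc. from the standard os-release file
. /etc/os-release

# Pad with spaces so each token is matched as a whole word
case " $ID $ID_LIKE " in
    *" rhel "*|*" centos "*|*" fedora "*)
        echo "EL/Fedora-family system detected: $PRETTY_NAME" ;;
    *)
        echo "Unrecognized distribution: $PRETTY_NAME" ;;
esac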

abrtd daemon deleting recently created application core dumps on CentOS 7

When I ran the command "systemctl status abrtd.service", I noticed the following:

[root@node1 ~]# systemctl status abrtd.service
● abrtd.service - ABRT Automated Bug Reporting Tool
   Loaded: loaded (/usr/lib/systemd/system/abrtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-01-19 09:52:50 +08; 2 months 11 days ago
 Main PID: 1113 (abrtd)
   CGroup: /system.slice/abrtd.service
           └─1113 /usr/sbin/abrtd -d -s


Apr 01 22:29:43 node1 abrt-server[361048]: Package 'rapidfile' isn't signed with proper key
Apr 01 22:29:44 node1 abrt-server[361048]: 'post-create' on '/var/spool/abrt/ccpp-2022-04-01-22:29:29-360942' exited with 1
Apr 01 22:29:44 node1 abrt-server[361048]: Deleting problem directory '/var/spool/abrt/ccpp-2022-04-01-22:29:29-360942'
Apr 01 22:53:14 node1 abrt-server[423453]: Executable '/usr/local/intel/2019u5/intelpython3/bin/python3.6' doesn't belong to any package and ProcessUnp...et to 'no'
Apr 01 22:53:14 node1 abrt-server[423453]: 'post-create' on '/var/spool/abrt/ccpp-2022-04-01-22:52:40-420563' exited with 1
Apr 01 22:53:14 node1 abrt-server[423453]: Deleting problem directory '/var/spool/abrt/ccpp-2022-04-01-22:52:40-420563'
Apr 01 23:55:22 node1 abrt-server[432522]: '.' does not exist
Apr 01 23:55:23 node1 abrt-server[432522]: 'post-create' on '/var/spool/abrt/ccpp-2022-04-01-23:55:09-432449' exited with 1
Apr 01 23:55:23 node1 abrt-server[432522]: Deleting problem directory '/var/spool/abrt/ccpp-2022-04-01-23:55:09-432449'
Apr 01 23:55:23 node1 abrt-server[432522]: '/var/spool/abrt/ccpp-2022-04-01-23:55:09-432449' does not exist
Hint: Some lines were ellipsized, use -l to show in full.

The issue appears to be the following:

  • The abrtd daemon deletes recently created core dumps
  • Error: "Package isn't signed with proper key"

According to the Red Hat Knowledge Base article "Why does the abrtd daemon delete recently created application core dumps?", the resolution is simply:

% vim /etc/abrt/abrt-action-save-package-data.conf 
OpenGPGCheck = no
ProcessUnpackaged = yes

Restart the abrtd daemon – as root – for the new settings to take effect:

# systemctl restart abrtd.service
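
Assuming the abrt-ccpp crash hook is installed and active, one rough way to verify the new behaviour is to crash a copy of a binary running from an unpackaged path (the /tmp/mysleep name is just a throwaway example), send it SIGSEGV so it dumps core, and then confirm with abrt-cli list that the problem directory is retained rather than deleted:

# cp /bin/sleep /tmp/mysleep
# /tmp/mysleep 600 &
# kill -s SIGSEGV $!
# abrt-cli list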

Red Hat describes the root cause as follows:
When the OpenGPGCheck variable is set to yes (the default setting), ABRT only analyses and handles crashes in applications provided by packages that are signed by the GPG keys listed in the /etc/abrt/gpg_keys file. Setting OpenGPGCheck = no tells ABRT to catch crashes in all programs. In addition, ABRT is configured by default to capture core dumps only for applications installed from RPM packages; the ProcessUnpackaged variable tells ABRT to keep the core dump even if the application was not installed via rpm/yum.

References:

  1. Red Hat Knowledge Base: Why does the abrtd daemon delete recently created application core dumps?

Top 10 videos from Red Hat Developer

  1. Podman: A Linux tool for working with containers and pods
     Get started with Podman, an open source, Linux-based tool that builds Docker-compatible container images.

  2. Easily secure your Spring Boot applications with Keycloak
     Discover how to deploy and configure a Keycloak server and then secure a Spring Boot application.

  3. Learn how to move your existing Java app to Kubernetes—without changing a single line of code
     Using the free Developer Sandbox for Red Hat OpenShift, we demo how you can take your existing source code or create a new application and easily deploy and manage them as containers.

  4. KBE Insider (E3): Luke Hinds
     We talk to Luke Hinds, Security Lead for the Office of the CTO at Red Hat, about his work on the Kubernetes Security Response Team, Sigstore, and the Kubernetes HackerOne Bug Bounty Program.

  5. Local OpenShift environment on Windows with Red Hat CodeReady Containers
     Brian Tannous walks through getting a local OpenShift environment installed on Windows using Red Hat CodeReady Containers.

  6. Securing apps and services with Keycloak authentication | DevNation Tech Talk
     See how to easily secure all of your applications and services, regardless of how they’re implemented and hosted, with Keycloak—all with little-to-no code required.

  7. A deep dive into Keycloak | DevNation Tech Talk
     This tutorial introduces Keycloak, an open source identity and access management solution for modern applications and services.

  8. Secure Spring Boot Microservices with Keycloak | DevNation Tech Talk
     In this interactive, live-coding session, you’ll explore the Spring Boot adapter provided by Keycloak.

  9. KBE Insider (E5): Savitha Raghunathan
     We talk to Savitha Raghunathan, Senior Software Engineer at Red Hat, about her work and experience as an open source contributor within the Kubernetes ecosystem.

  10. Apache Kafka + Debezium | DevNation Tech Talk
      This tutorial explores how to use Apache Kafka and Debezium. Learn how to use change data capture for reliable microservices integration.

Install and Enable EPEL Repository for CentOS 7.x

EPEL is an acronym for Extra Packages for Enterprise Linux. The EPEL repository is used by the following Linux distributions:

  • Red Hat Enterprise Linux (RHEL)
  • CentOS
  • Oracle Linux

On the terminal:

Install EPEL Repository

# yum -y install epel-release

Verify the EPEL Repository

# yum repolist

Install Packages from EPEL Repository

# yum install -y htop

Search for a Package in the EPEL Repository Only (e.g., htop)

# yum --disablerepo="*" --enablerepo="epel" list available | grep 'htop'
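
After installation, yum records which repository a package came from, so you can confirm that it really was pulled from EPEL (using htop as the example again; the exact column spacing varies):

# yum info htop | grep -i "from repo"
From repo   : epel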

CentOS Replacement Rocky Linux Is Now in GA and Under Independent Control

The Rocky Enterprise Software Foundation (RESF) is announcing the general availability of Rocky Linux, release 8.4, designed as a drop-in replacement for the soon-to-be discontinued CentOS. The GA release is launching six-and-a-half months after Red Hat deprecated its support for the widely popular, free CentOS server operating system.

HPCwire, 21-June-2021

For more information, take a look at the HPCwire article "CentOS Replacement Rocky Linux Is Now in GA and Under Independent Control".

Also take a look at the Rocky Linux site.

Finding physical cpus, cores and logical cpus

Number of Active Physical Processors

% grep physical.id /proc/cpuinfo | sort -u | wc -l
8

Number of cores per CPU

% grep cpu.cores /proc/cpuinfo | sort -u
cpu cores       : 26

Therefore the total number of cores is
8×26 = 208 cores

Number of Logical Processors

This is the number of logical processors seen by Linux. Since the server has Hyper-Threading switched on, each physical core appears as two logical processors (208 × 2 = 416):

% grep processor /proc/cpuinfo | wc -l
416
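
The same three figures can be cross-checked with lscpu (see its Socket(s), Core(s) per socket, and Thread(s) per core lines), or computed in one small script. This is only a sketch: it assumes the stock /proc/cpuinfo layout and a homogeneous system where every socket reports the same core count:

#!/bin/bash
# Sockets = number of unique "physical id" values
sockets=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)
# Cores per socket, from the "cpu cores" field (assumes one unique value)
cores_per_socket=$(grep "cpu cores" /proc/cpuinfo | sort -u | awk '{print $NF}')
# Logical CPUs = number of "processor" entries
logical=$(grep -c "^processor" /proc/cpuinfo)

echo "Sockets          : $sockets"
echo "Cores per socket : $cores_per_socket"
echo "Total cores      : $((sockets * cores_per_socket))"
echo "Logical CPUs     : $logical"

On the server above, this prints 8, 26, 208, and 416 respectively.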

References:

  1. How to find the number of physical cpus, cpu cores, and logical cpus

Understanding Load Average in Linux

This is taken from the Red Hat article "What is the relation between I/O wait and load average?", from which I learned quite a bit.

Linux, unlike traditional UNIX operating systems, computes its load average as the average number of runnable or running processes (R state) plus the number of processes in uninterruptible sleep (D state) over the specified interval. On UNIX systems, only the runnable or running processes are taken into account for the load average calculation.

On Linux the load average is a measurement of the amount of “work” being done by the machine (without being specific as to what that work is). This “work” could reflect a CPU intensive application (compiling a program or encrypting a file), or something I/O intensive (copying a file from disk to disk, or doing a database full table scan), or a combination of the two.
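
The three figures reported by uptime, top, and w are the 1-, 5-, and 15-minute load averages, and they come straight from the kernel, so you can read them without any tooling. The output below is illustrative (the first three values are reused from the session further down); the fourth field is the number of running over total scheduling entities, and the fifth is the most recently created PID:

% cat /proc/loadavg
1.40 1.26 1.02 3/1024 361048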

The article shows how to determine whether a high load average is the result of processes in the running state or in the uninterruptible state.

I like this script from the knowledge base. It prints the counts of running, blocked, and running+blocked processes every five seconds:

[user@node1 ~]$ while true; do echo; uptime; ps -efl | awk 'BEGIN {running = 0; blocked = 0} $2 ~ /R/ {running++}; $2 ~ /D/ {blocked++} END {print "Number of running/blocked/running+blocked processes: "running"/"blocked"/"running+blocked}'; sleep 5; done

 23:45:52 up 52 days,  7:06, 22 users,  load average: 1.40, 1.26, 1.02
Number of running/blocked/running+blocked processes: 3/1/4

 23:45:57 up 52 days,  7:06, 22 users,  load average: 1.45, 1.27, 1.02
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:02 up 52 days,  7:06, 22 users,  load average: 1.41, 1.27, 1.02
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:07 up 52 days,  7:07, 22 users,  load average: 1.46, 1.28, 1.03
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:12 up 52 days,  7:07, 22 users,  load average: 1.42, 1.27, 1.03
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:17 up 52 days,  7:07, 22 users,  load average: 1.55, 1.30, 1.04
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:22 up 52 days,  7:07, 22 users,  load average: 1.51, 1.30, 1.04
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:27 up 52 days,  7:07, 22 users,  load average: 1.55, 1.31, 1.05
Number of running/blocked/running+blocked processes: 2/0/2

 23:46:32 up 52 days,  7:07, 22 users,  load average: 1.62, 1.33, 1.06
Number of running/blocked/running+blocked processes: 2/1/3

 23:46:38 up 52 days,  7:07, 22 users,  load average: 1.81, 1.38, 1.07
Number of running/blocked/running+blocked processes: 1/1/2

 23:46:43 up 52 days,  7:07, 22 users,  load average: 1.66, 1.35, 1.07
Number of running/blocked/running+blocked processes: 1/0/1

 23:46:48 up 52 days,  7:07, 22 users,  load average: 1.53, 1.33, 1.06
Number of running/blocked/running+blocked processes: 1/0/1

Another useful approach is to look at the top output when the load average is high (press i to filter out idle/sleeping tasks). In the example below, the high load average is caused by many sendmail tasks in D status; they may be waiting either on I/O or on the network.

top - 13:23:21 up 329 days,  8:35,  0 users,  load average: 50.13, 13.22, 6.27
Tasks: 437 total,   1 running, 435 sleeping,   0 stopped,   1 zombie
Cpu(s):  0.1%us,  1.5%sy,  0.0%ni, 93.6%id,  4.5%wa,  0.1%hi,  0.2%si,  0.0%st
Mem:  34970576k total, 24700568k used, 10270008k free,  1166628k buffers
Swap:  2096440k total,        0k used,  2096440k free, 11233868k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
11975 root      15   0 13036 1356  820 R  0.7  0.0   0:00.66 top                
15915 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15918 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15920 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15921 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15922 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15923 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15924 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15926 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15928 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15929 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15930 root      17   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
15931 root      18   0  5312  872   80 D  0.0  0.0   0:00.00 sendmail           
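
To dig into what the D-state tasks are actually blocked on, ps can print the kernel wait channel (wchan) for each process. Filtering on state D gives a quick picture; this assumes a procps ps that supports the state and wchan output keywords, which any recent distribution's does:

% ps -eo pid,state,wchan:32,comm | awk '$2 == "D"'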

References:

  1. What is the relation between I/O wait and load average?