‘abrt-cli status’ timed out is always shown when logging on or switching users

When logging in or switching to a specific user, ‘abrt-cli status’ timed out is always shown:

Last login: Mon Dec 19 23:32:58 +08 2022 on pts/21 
'abrt-cli status' timed out

To resolve the issue, check the status of the ‘abrtd’ service; the output will indicate a locked file.

# systemctl status abrtd
● abrtd.service - ABRT Automated Bug Reporting Tool
   Loaded: loaded (/usr/lib/systemd/system/abrtd.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-12-19 23:34:58 +08; 2s ago
 Main PID: 273413 (abrtd)
   CGroup: /system.slice/abrtd.service
           └─273413 /usr/sbin/abrtd -d -s

Dec 19 23:34:58 node1 systemd[1]: Started ABRT Automated Bug Reporting Tool.
Dec 19 23:34:58 node1 systemd[1]: Starting ABRT Automated Bug Reporting Tool...
Dec 19 23:34:58 node1 abrtd[273413]: Lock file '.lock' is locked by process 191242
Dec 19 23:34:59 node1 abrtd[273413]: Lock file '.lock' is locked by process 191242
Dec 19 23:34:59 node1 abrtd[273413]: Lock file '.lock' is locked by process 191242
Dec 19 23:35:00 node1 abrtd[273413]: Lock file '.lock' is locked by process 191242
Dec 19 23:35:00 node1 abrtd[273413]: Lock file '.lock' is locked by process 191242

Stop the abrtd service first.

# systemctl stop abrtd

Kill the process holding the lock file. In this example, the abrtd log above shows it is PID 191242:

# kill -9 191242

Start the service again.

# systemctl start abrtd

The lock file should go away:

# systemctl status abrtd
● abrtd.service - ABRT Automated Bug Reporting Tool
   Loaded: loaded (/usr/lib/systemd/system/abrtd.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-12-19 23:48:02 +08; 4s ago
 Main PID: 334010 (abrtd)
   CGroup: /system.slice/abrtd.service
           └─334010 /usr/sbin/abrtd -d -s

Dec 19 23:48:02 hpc-gekko1 systemd[1]: Started ABRT Automated Bug Reporting Tool.
Dec 19 23:48:02 hpc-gekko1 systemd[1]: Starting ABRT Automated Bug Reporting Tool...
Dec 19 23:48:02 hpc-gekko1 abrtd[334010]: Init complete, entering main loop
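
Once abrtd is running cleanly, you can confirm the original symptom is gone by running the command manually; it should now return promptly instead of timing out:

$ abrt-cli status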

GCCGO Error During GCC-10.4.0 Compilation on CentOS 7

If you encounter “gccgo: error: ../x86_64-pc-linux-gnu/libgo/libgotool.a: No such file or directory”, the build output will look similar to this:

.....
.....
/home/user1/gcc-10.4.0/host-x86_64-pc-linux-gnu/gcc/gccgo -B/home/user1/gcc-10.4.0/host-x86_64-pc-linux-gnu/gcc/ -B/usr/x86_64-pc-linux-gnu/bin/ -B/usr/x86_64-pc-linux-gnu/lib/ -isystem /usr/x86_64-pc-linux-gnu/include -isystem /usr/x86_64-pc-linux-gnu/sys-include   -g -O2 -I ../x86_64-pc-linux-gnu/libgo -static-libstdc++ -static-libgcc  -L ../x86_64-pc-linux-gnu/libgo -L ../x86_64-pc-linux-gnu/libgo/.libs -o go ../.././gotools/../libgo/go/cmd/go/alldocs.go ../.././gotools/../libgo/go/cmd/go/go11.go ../.././gotools/../libgo/go/cmd/go/main.go ../x86_64-pc-linux-gnu/libgo/libgotool.a  
gccgo: error: ../x86_64-pc-linux-gnu/libgo/libgotool.a: No such file or directory
make[2]: *** [Makefile:821: go] Error 1
make[2]: Leaving directory '/home/user1/gcc-10.4.0/host-x86_64-pc-linux-gnu/gotools'
make[1]: *** [Makefile:14649: all-gotools] Error 2
make[1]: Leaving directory '/home/user1/gcc-10.4.0'
make: *** [Makefile:997: all] Error 2

The issue can be resolved by building GCC in a separate build directory rather than in the source tree. From the top of the GCC source directory:

% ./contrib/download_prerequisites
% mkdir build
% cd build
% ../configure --prefix=/usr/local/gcc-10.4.0 --disable-multilib --enable-languages=all
% make -j 8
% make install
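
Once the build completes, you can verify the new compilers under the prefix chosen above:

% /usr/local/gcc-10.4.0/bin/gcc --version
% /usr/local/gcc-10.4.0/bin/gccgo --version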

abrtd daemon deleting recently created application core dumps on CentOS 7

When I ran the command “systemctl status abrtd.service”, I noticed the following:

[root@node1 ~]# systemctl status abrtd.service
● abrtd.service - ABRT Automated Bug Reporting Tool
   Loaded: loaded (/usr/lib/systemd/system/abrtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-01-19 09:52:50 +08; 2 months 11 days ago
 Main PID: 1113 (abrtd)
   CGroup: /system.slice/abrtd.service
           └─1113 /usr/sbin/abrtd -d -s


Apr 01 22:29:43 node1 abrt-server[361048]: Package 'rapidfile' isn't signed with proper key
Apr 01 22:29:44 node1 abrt-server[361048]: 'post-create' on '/var/spool/abrt/ccpp-2022-04-01-22:29:29-360942' exited with 1
Apr 01 22:29:44 node1 abrt-server[361048]: Deleting problem directory '/var/spool/abrt/ccpp-2022-04-01-22:29:29-360942'
Apr 01 22:53:14 node1 abrt-server[423453]: Executable '/usr/local/intel/2019u5/intelpython3/bin/python3.6' doesn't belong to any package and ProcessUnp...et to 'no'
Apr 01 22:53:14 node1 abrt-server[423453]: 'post-create' on '/var/spool/abrt/ccpp-2022-04-01-22:52:40-420563' exited with 1
Apr 01 22:53:14 node1 abrt-server[423453]: Deleting problem directory '/var/spool/abrt/ccpp-2022-04-01-22:52:40-420563'
Apr 01 23:55:22 node1 abrt-server[432522]: '.' does not exist
Apr 01 23:55:23 node1 abrt-server[432522]: 'post-create' on '/var/spool/abrt/ccpp-2022-04-01-23:55:09-432449' exited with 1
Apr 01 23:55:23 node1 abrt-server[432522]: Deleting problem directory '/var/spool/abrt/ccpp-2022-04-01-23:55:09-432449'
Apr 01 23:55:23 node1 abrt-server[432522]: '/var/spool/abrt/ccpp-2022-04-01-23:55:09-432449' does not exist
Hint: Some lines were ellipsized, use -l to show in full.

The issue seems to be caused by:

  • The abrtd daemon deletes recently created core dumps
  • Error: Package isn’t signed with proper key

According to the Red Hat Knowledge Base article “Why does the abrtd daemon delete recently created application core dumps?”, the resolution is simple:

# vim /etc/abrt/abrt-action-save-package-data.conf
OpenGPGCheck = no
ProcessUnpackaged = yes

Restart the abrtd daemon – as root – for the new settings to take effect:

# systemctl restart abrtd.service
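
Once a new crash occurs, the problem directory should now be kept rather than deleted. You can list the captured problems with:

# abrt-cli list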

According to Red Hat, the root cause is as follows:

When the OpenGPGCheck variable is set to yes (the default setting), ABRT only analyses and handles crashes in applications provided by packages which are signed by the GPG keys whose locations are listed in the /etc/abrt/gpg_keys file. Setting OpenGPGCheck = no tells ABRT to catch crashes in all programs. In addition, ABRT is by default configured to capture core dumps only for applications installed from an RPM package; setting ProcessUnpackaged = yes tells ABRT to keep the core dump even if the application was not installed via rpm/yum.

References:

  1. Red Hat Knowledge Base: Why does the abrtd daemon delete recently created application core dumps?

Install and Enable EPEL Repository for CentOS 7.x

EPEL stands for Extra Packages for Enterprise Linux. The EPEL repository is used by the following Linux distributions:

  • Red Hat Enterprise Linux (RHEL)
  • CentOS
  • Oracle Linux

On the Terminal,

Install EPEL Repository

# yum -y install epel-release

Verify the EPEL Repository

# yum repolist

Install Packages from EPEL Repository

# yum install -y htop

Search for a Package in the EPEL Repository (e.g. htop)

# yum --disablerepo="*" --enablerepo="epel" list available | grep 'htop'
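
To view the details of an EPEL package before installing it (htop used as an example):

# yum --disablerepo="*" --enablerepo="epel" info htop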

Error “Too many files open” on CentOS 7

If you encounter “Too many open files” error messages during login and the session is terminated automatically, the open file limit for a user or the system has exceeded the default setting, and you may wish to change it.

@ System Level

To see the settings for maximum open files,

# cat /proc/sys/fs/file-max
55494980

This value is the maximum number of files that all processes running on the system can open in total. By default it varies with the amount of RAM in the system; as a rough guideline, it is about 100,000 files per GB of RAM.
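
To see how many file handles are currently allocated against this limit, check /proc/sys/fs/file-nr. The three fields are the number of allocated file handles, the number of allocated but unused handles, and the system maximum (sample figures shown):

# cat /proc/sys/fs/file-nr
18624   0       55494980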

To override the system-wide maximum open files, edit /etc/sysctl.conf:

# vim /etc/sysctl.conf
 fs.file-max = 80000000

Apply the change to the running system:

# sysctl -p
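
Confirm that the new value has taken effect:

# sysctl fs.file-max
fs.file-max = 80000000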

@ User Level

To see the setting for maximum open files for a user

# su - user1
$ ulimit -n
1024

To change the setting for a specific user, edit /etc/security/limits.conf as root (user1 used here as an example):

# vim /etc/security/limits.conf
user1 - nofile 2048

To change it for all users:

* - nofile 2048

This sets the maximum open files for ALL users to 2048. The settings take effect after the user logs in again.
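
After the user logs in again, verify that the new limit is in effect:

# su - user1
$ ulimit -n
2048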

References:

  1. How to correct the error “Too many files open” on Red Hat Enterprise Linux

Tools to Show your System Configuration

Tool 1: Display Information about CPU Architecture

[user1@node1 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz
Stepping: 4
CPU MHz: 3200.000
BogoMIPS: 6400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
.....
.....

Tool 2: List all PCI devices

[user1@node1 ~]# lspci -t -vv
-+-[0000:d7]-+-00.0-[d8]--
| +-01.0-[d9]--
| +-02.0-[da]--
| +-03.0-[db]--
| +-05.0 Intel Corporation Device 2034
| +-05.2 Intel Corporation Sky Lake-E RAS Configuration Registers
| +-05.4 Intel Corporation Device 2036
| +-0e.0 Intel Corporation Device 2058
| +-0e.1 Intel Corporation Device 2059
| +-0f.0 Intel Corporation Device 2058
| +-0f.1 Intel Corporation Device 2059
| +-10.0 Intel Corporation Device 2058
| +-10.1 Intel Corporation Device 2059
| +-12.0 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.1 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.2 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.4 Intel Corporation Sky Lake-E M3KTI Registers
| +-12.5 Intel Corporation Sky Lake-E M3KTI Registers
| +-15.0 Intel Corporation Sky Lake-E M2PCI Registers
| +-16.0 Intel Corporation Sky Lake-E M2PCI Registers
| +-16.4 Intel Corporation Sky Lake-E M2PCI Registers
| \-17.0 Intel Corporation Sky Lake-E M2PCI Registers
.....
.....

Tool 3: List block devices

[user@node1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  1.1T  0 disk
├─sda1            8:1    0  200M  0 part /boot/efi
├─sda2            8:2    0    1G  0 part /boot
└─sda3            8:3    0  1.1T  0 part
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0    4G  0 lvm  [SWAP]
  └─centos-home 253:2    0    1T  0 lvm

Tool 4: See flags kernel booted with

[user@node1 ~]$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

Tool 5: Display available network interfaces

[root@hpc-gekko1 ~]# ifconfig -a
eno1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether XX:XX:XX:XX:XX:XX txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
.....
.....
.....

Tool 6: Using dmidecode to find hardware information

See the post “Using dmidecode to find hardware information”.
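
As a quick illustration (run as root; the output depends entirely on your hardware), dmidecode can dump a single DMI type, such as the system or memory information:

# dmidecode -t system
# dmidecode -t memory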

How to configure NFS on CentOS 7

Step 1: Do a Yum Install

# yum install nfs-utils rpcbind

Step 2: Enable the Services at Boot Time

# systemctl enable nfs-server
# systemctl enable rpcbind
# systemctl enable nfs-lock     (it does not need to be enabled since rpc-statd.service  is static.)
# systemctl enable nfs-idmap    (it does not need to be enabled since nfs-idmapd.service is static.)

Step 3: Start the Services

# systemctl start rpcbind
# systemctl start nfs-server
# systemctl start nfs-lock
# systemctl start nfs-idmap

Step 4: Confirm the status of NFS

# systemctl status nfs

Step 5: Create the Directory to Share

# mkdir /shared-data

Step 6: Add the Share to /etc/exports

# vim /etc/exports
/shared-data 192.168.0.0/16(rw,no_root_squash)

Step 7: Export the Share

# exportfs -rv
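
You can list the active exports to confirm (given the /shared-data entry above):

# showmount -e localhost
Export list for localhost:
/shared-data 192.168.0.0/16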

Step 8: Restart the NFS Services

# systemctl restart nfs-server

Step 9: Configure the Firewall

# firewall-cmd --add-service=nfs --zone=internal --permanent
# firewall-cmd --add-service=mountd --zone=internal --permanent
# firewall-cmd --add-service=rpc-bind --zone=internal --permanent
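
Since the rules were added with --permanent, reload the firewall for them to take effect:

# firewall-cmd --reload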

References:

  1. How to configure NFS in RHEL 7
  2. What firewalld services should be active on an NFS server in RHEL 7?

Configuring External libraries for R-Studio for CentOS 7

RStudio can be configured by adding entries to two configuration files. They may not exist by default, and you may need to create them:

/etc/rstudio/rserver.conf
/etc/rstudio/rsession.conf

If you need to add additional libraries to the default LD_LIBRARY_PATH for R sessions, add the parameter to /etc/rstudio/rserver.conf.

Step 1: Enter the external library settings
For example, you may want to add the GCC-6.5 libraries:

# Server Configuration File
rsession-ld-library-path=/usr/local/gcc-6.5.0/lib64:/usr/local/gcc-6.5.0/lib

Step 2a: Stop the R Server Services

# /usr/sbin/rstudio-server stop

Step 2b: Verify the Installation

# rstudio-server verify-installation

Step 2c: Start the R Server Services

# /usr/sbin/rstudio-server start
# /usr/sbin/rstudio-server status

(Make sure there are no errors.)

References:

  1. RStudio Server: Configuring the Server

Installing R-Studio on CentOS-7

Prerequisites

  • R-Studio (rstudio-server-rhel-1.3.1093-x86_64.rpm)
  • R-3.6.3
  • GCC-6.5
  • m4-1.4.18
  • gmp-6.1.0
  • mpfr-3.1.4
  • mpc-1.0.3
  • isl-0.18
  • gsl-2.1

Step 1: Download R-Studio v1.3.1093

On your terminal, download the free version of RStudio Server v1.3.1093 from RStudio.com.

Step 2: Install R-Studio v1.3.1093

Follow the Install Process as written in https://rstudio.com/products/rstudio/download-server/redhat-centos/

% wget https://download2.rstudio.org/server/centos6/x86_64/rstudio-server-rhel-1.3.1093-x86_64.rpm
% yum install rstudio-server-rhel-1.3.1093-x86_64.rpm

Step 3: Verify Installation

% /usr/sbin/rstudio-server verify-installation

Step 4: Make sure R is compiled and placed correctly

  • Make sure R was compiled with the --enable-R-shlib flag
  • RStudio will search for your installation of R in the traditional locations, such as
    /usr/bin/R
    /usr/local/bin/R

    If R is not installed in these locations, RStudio may not be able to find it.

Step 5: Fixing Login Issues

If you managed to launch RStudio but are unable to log in or are getting an error, take a look at PAM authentication in RStudio Connect.

Simply copy /etc/pam.d/login over /etc/pam.d/rstudio:

% cp /etc/pam.d/login /etc/pam.d/rstudio
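
After replacing the PAM profile, restart RStudio Server so the change takes effect:

% /usr/sbin/rstudio-server restart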


Useful References:

  1. RStudio Server Will Not Start
  2. PAM authentication in RStudio Connect

How to increase the number of threads created by the NFS daemon for CentOS 7

Taken from the Red Hat article “How to increase the number of threads created by the NFS daemon in RHEL 4, 5, 6 and 7?”

On an NFS server with a high load, it may be advisable to increase the number of threads created during nfsd server startup.

Edit the following line in /etc/nfs.conf. Note that the [nfsd] section header must be uncommented for the setting to take effect:

% vim /etc/nfs.conf
[nfsd]
# debug=0
threads=64
# host=
# port=0
# grace-time=90
# lease-time=90
# udp=y
# tcp=y
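
Restart the NFS server for the new thread count to take effect:

% systemctl restart nfs-server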

Test whether it works:

% cat /proc/net/rpc/nfsd

th 64 0 2.610 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000

According to Red Hat, “The first number is the total number of NFS server threads started. The second number indicates whether at any time all of the threads were running at once. The remaining numbers are a thread count time histogram.”
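
Alternatively, the current thread count can be read directly from the nfsd procfs interface:

% cat /proc/fs/nfsd/threads
64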