Understanding basic nmcli in Rocky Linux 9

In Rocky Linux 9, the nmcli command-line tool (NetworkManager Command Line Interface) and its keyfile-based connection profiles replace the traditional ifcfg files that we had been using up to Rocky Linux 8. If you Google “Why nmcli is replacing the ifcfg”, you will find a comprehensive list of reasons why the transition took place. The one I like best is this particular answer:

nmcli commands are designed to be easily automated and scripted (e.g., using Ansible), offering better control and error checking (syntax validation) compared to generating flat text files through scripts.
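As a quick illustration of that scriptability, here is a minimal sketch that changes a profile and checks the result; the connection name ens33 is taken from the examples below, and the error-handling policy is just one possibility:

#!/bin/bash
# Minimal sketch: nmcli validates property names and values and returns a
# non-zero exit code on bad input, which makes it safe to script.
CON=ens33
if nmcli con mod "$CON" ipv4.dns '8.8.8.8'; then
    # Re-activate the connection so the change takes effect
    nmcli con up "$CON"
else
    echo "nmcli rejected the change for $CON" >&2
    exit 1
fi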

Usage 1a: List the NetworkManager connection profiles

# nmcli con
NAME   UUID                                  TYPE      DEVICE 
ens33  xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ethernet  ens33  
lo     yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  loopback  lo 

Usage 1b: List the Network Devices and their status

# nmcli dev
DEVICE  TYPE      STATE                   CONNECTION 
ens33   ethernet  connected               ens33      
lo      loopback  connected (externally)  lo        

Usage 2a: Disable the connection of ens33

# nmcli con down ens33
Connection 'ens33' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)

Usage 2b: Enable the connection of ens33

# nmcli con up ens33
Connection 'ens33' successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)

Usage 2c: Show Connection Details

# nmcli con show ens33
connection.id:                          ens33
connection.uuid:                        817c4ac5-49f4-3752-9a16-9d7460bed1c9
connection.stable-id:                   --
connection.type:                        802-3-ethernet
connection.interface-name:              ens33
connection.autoconnect:                 yes
connection.autoconnect-priority:        -999
connection.autoconnect-retries:         -1 (default)
connection.multi-connect:               0 (default)
connection.auth-retries:                -1
connection.timestamp:                   1763952141
connection.permissions:                 --
connection.zone:                        --
connection.controller:                  --
connection.master:                      --
connection.slave-type:                  --
connection.port-type:                   --
connection.autoconnect-slaves:          -1 (default)
connection.autoconnect-ports:           -1 (default)
connection.down-on-poweroff:            -1 (default)
connection.secondaries:                 --
connection.gateway-ping-timeout:        0
connection.ip-ping-timeout:             0
connection.ip-ping-addresses:           --
connection.ip-ping-addresses-require-all:-1 (default)
connection.metered:                     unknown
connection.lldp:                        default
.....
.....

Usage 3: Set the static IP Address of the Ethernet Connection

# nmcli con mod ens33 ipv4.method manual ipv4.address 10.10.1.2/24 ipv4.gateway 10.10.1.1
# nmcli con up ens33
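
To confirm the profile took the new values, you can query individual properties with nmcli's terse output mode (-g / --get-values):

# nmcli -g ipv4.addresses con show ens33
10.10.1.2/24
# nmcli -g ipv4.gateway con show ens33
10.10.1.1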

Usage 4a: Using nmcli to update DNS (replace manual scripting of /etc/resolv.conf)

# nmcli con mod ens33 ipv4.dns '8.8.8.8,8.8.4.4'
# nmcli con show ens33 | grep dns
# nmcli con up ens33

In /etc/resolv.conf, you will notice:

# Generated by NetworkManager
search myown.domain.com
nameserver 8.8.8.8
nameserver 8.8.4.4

Usage 4b: Using nmcli to update domain search (replace manual scripting of /etc/resolv.conf)

# nmcli con mod ens33 ipv4.dns-search 'myown.domain.com'
# nmcli con up ens33

Usage 5a: Disable IPv6

# nmcli con mod ens33 ipv6.method "disabled"
# nmcli con up ens33

Running nmcli con show ens33 again, you should see:

.....
....
ipv6.method:                            disabled
ipv6.dns:                               --
ipv6.dns-search:                        --
ipv6.dns-options:                       --
ipv6.dns-priority:                      0
ipv6.addresses:                         --
....
.....

Display the IP settings of the device. If no inet6 entry is displayed, IPv6 is disabled on the device.

# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.x.x/19 brd 192.168.x.x scope global noprefixroute ens33
       valid_lft forever preferred_lft forever

driverExceptions.EnvironmentFileReadError on ABAQUS 2025 Hotfix 4 for Rocky Linux 8

If you encounter the following issue when you type “abaqus cae”:

driverExceptions.EnvironmentFileReadError: /...../......../abaqus_v6.env
File "SMAPyaModules/SMAPyaDriverPy.m/src/driverUtilsCae.py", line 44, in executeOnCaeGraphicsStartup
File "SMAPyaModules/SMAPyaDriverPy.m/src/driverUtilsCae.py", line 28, in callStartupMethod
File "SMAPylModules/SMAPylDriverPy.m/src/driverEnv.py", line 878, in read
File "SMAPylModules/SMAPylDriverPy.m/src/driverEnv.py", line 770, in _updateEnvFromFile
File "SMAPylModules/SMAPylDriverPy.m/src/driverEnv.py", line 672, in _readEnvironmentFile
File "SMAPylModules/SMAPylDriverPy.m/src/driverEnv.py", line 391, in envRunFile

Abaqus Error: Abaqus/CAE Kernel exited with an error.

The solution is super easy. Just do the following:

export LANG=en_US.UTF-8
abaqus cae
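
If you hit this on every login, a simple way to make the fix permanent for your account (assuming you use bash and the en_US.UTF-8 locale is generated on the system) is:

echo 'export LANG=en_US.UTF-8' >> ~/.bashrc
source ~/.bashrc
abaqus cae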

Could not resolve host: mirrorlist.centos.org for CentOS-7

Issues:

If you need to install something on your EOL CentOS-7 system, for example lm_sensors, you may encounter errors like:

Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was
14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"
http://mirror.centos.org/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.centos.org; Unknown error"
Trying other mirror.

Solutions:

Create a directory and save the original repo files before editing:

# cd /etc/yum.repos.d
# mkdir original_repos
# cp -v *.repo original_repos

You will need to update the repo files to point to vault.centos.org: comment out the mirrorlist lines, uncomment the baseurl lines, and redirect them to the vault.

sed -i 's/^mirrorlist=http/#mirrorlist=http/g' /etc/yum.repos.d/CentOS-*.repo
sed -i 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*.repo
sed -i 's/mirror.centos.org/vault.centos.org/g' /etc/yum.repos.d/CentOS-*.repo

Clean the old cache

# yum clean all

Let’s try installing lm_sensors using yum.

# yum install lm_sensors
# yum install lm_sensors-sensord
# sensors-detect
# watch -d sensors
# service sensord start
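
If you want sensord to survive a reboot, enable it as well; CentOS 7 runs systemd, so systemctl works alongside the older service syntax:

# systemctl enable sensord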

Installing ClamAV on Rocky Linux 8

Do read up on What is ClamAV by Liquid Web for more information on ClamAV.

I thought I would list a few pointers that might be of use.

  • ClamAV is free and open-source antivirus software and a cross-platform antivirus toolkit.
  • For Linux systems, it offers real-time (on-access) protection, which is a crucial feature against zero-day attacks.
  • ClamAV provides a multi-threaded scanner daemon, a tool for automatic virus database updates, and a command-line scanner.

a. Install ClamAV and its services, which include the antivirus scanner and the virus database updater. The packages come from EPEL, so run dnf install epel-release first if the repository is not yet enabled.

# dnf install clamav clamd clamav-update

b. Setting up a Service Account

If you’re planning to run freshclam or clamd as a service on a Linux or Unix system, you should create a service account. The following instructions assume that you will use an account named “clamav” for both services, although you may create a different account name for each if you wish.

# groupadd clamav
# useradd -g clamav -s /bin/false -c "Clam Antivirus" clamav

c. Configure SELINUX for ClamAV

# setsebool -P antivirus_can_scan_system 1

d. Run ClamAV Database Update Command

# freshclam

e. Suggested configuration of /etc/clamd/scan.conf or /etc/clamd/clamd.conf, as written in the ClamAV Setup Notes

ExtendedDetectionInfo yes
FixStaleSocket yes
LocalSocket /var/run/clamav/clamd.ctl
LogFile /var/log/clamav/clamav.log
LogFileMaxSize 5M
LogRotate yes
LogTime yes
MaxDirectoryRecursion 15
MaxThreads 20
OnAccessExcludeUname clamav
OnAccessExcludeUname root
OnAccessIncludePath /home
OnAccessMountPath /home/johnfedoruk
OnAccessPrevention yes
User root
VirusEvent /etc/clamav/detected.sh
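
Note that VirusEvent points at /etc/clamav/detected.sh, which is not shipped by ClamAV; you supply that script yourself. A minimal sketch might just log each hit, using the CLAM_VIRUSEVENT_FILENAME and CLAM_VIRUSEVENT_VIRUSNAME environment variables that clamd exports to the VirusEvent command:

#!/bin/bash
# /etc/clamav/detected.sh -- invoked by clamd on every detection.
# clamd passes the offending file and the signature name in the environment.
echo "$(date): ${CLAM_VIRUSEVENT_VIRUSNAME} found in ${CLAM_VIRUSEVENT_FILENAME}" >> /var/log/clamav/detected.log

Remember to make the script executable with chmod +x /etc/clamav/detected.sh.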

f. Create and edit the system’s freshclam.service

vim /usr/lib/systemd/system/freshclam.service

[Unit]
Description = ClamAV Virus Database Updater (freshclam)
After = network.target

[Service]
Type = forking
# -c sets the number of database checks per day; raise the 1 if you want to
# update the database more than once a day
ExecStart = /usr/bin/freshclam -d -c 1
Restart = on-failure
PrivateTmp = true

[Install]
WantedBy=multi-user.target

g. Start and enable the FreshClam and Clamd scanner services

# systemctl start freshclam
# systemctl enable freshclam
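
The EPEL clamd package ships a clamd@.service template rather than a plain clamd.service, so if your scanner configuration lives at /etc/clamd.d/scan.conf (the EPEL default), the scanning daemon is typically started as the "scan" instance:

# systemctl enable --now clamd@scan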

h. Scanning a Directory

# clamscan -r /tmp
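
To confirm detection actually works, you can scan the harmless EICAR test string; the exact signature name reported may vary between ClamAV database versions:

# printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.txt
# clamscan /tmp/eicar.txt
/tmp/eicar.txt: Win.Test.EICAR_HDB-1 FOUND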

References:

  1. Installing ClamAV
  2. ClamAV Setup Notes
  3. Install ClamAV Antivirus on Rocky Linux 8 or Alma Linux 8

Issues when Installing Docker on Rocky Linux 8.10

I was installing Docker on Rocky Linux 8.10. These were my steps:

dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce docker-ce-cli containerd.io

I immediately got this error:

Error: 
 Problem 1: problem with installed package podman-4:4.9.4-1.module+el8.10.0+1815+5fe7415e.x86_64
  - package podman-4:4.9.4-1.module+el8.10.0+1815+5fe7415e.x86_64 from @System requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-1.module+el8.10.0+1815+5fe7415e.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-1.module+el8.10.0+1825+623b0c20.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-12.module+el8.10.0+1843+6892ab28.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-13.module+el8.10.0+1871+e6fa1069.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-13.module+el8.10.0+1874+ce489889.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed

To resolve the issue, add the --allowerasing flag, which lets dnf replace the conflicting podman stack with Docker's packages:

dnf install docker-ce docker-ce-cli containerd.io --allowerasing
================================================================================
 Package                   Arch   Version                Repository        Size
================================================================================
Installing:
 containerd.io             x86_64 1.6.32-3.1.el8         docker-ce-stable  35 M
     replacing  runc.x86_64 1:1.1.12-1.module+el8.10.0+1815+5fe7415e
 docker-ce                 x86_64 3:26.1.3-1.el8         docker-ce-stable  27 M
 docker-ce-cli             x86_64 1:26.1.3-1.el8         docker-ce-stable 7.8 M
Installing dependencies:
 libcgroup                 x86_64 0.41-19.el8            baseos            69 k
Installing weak dependencies:
 docker-buildx-plugin      x86_64 0.14.0-1.el8           docker-ce-stable  14 M
 docker-ce-rootless-extras x86_64 26.1.3-1.el8           docker-ce-stable 5.0 M
 docker-compose-plugin     x86_64 2.27.0-1.el8           docker-ce-stable  13 M
Removing dependent packages:
 buildah                   x86_64 1:1.34.0-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream        31 M
 cockpit-podman            noarch 84.1-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream       682 k
 containers-common         x86_64 2:1-81.module+el8.10.0+1815+5fe7415e
                                                         @AppStream       580 k
 podman                    x86_64 4:4.9.4-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream        52 M
 podman-catatonit          x86_64 4:4.9.4-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream       794 k

Transaction Summary
================================================================================
Install  7 Packages
Remove   5 Packages

Total download size: 102 M
Is this ok [y/N]: y
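
Once the transaction completes, you can start Docker and verify the installation with the standard hello-world container:

# systemctl enable --now docker
# docker run hello-world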

Unable to run hydra_bstrap_proxy when using mpiexec

If you are facing an error similar to the one below, where the possible reasons provided are:

  1. Host is unavailable. Please check that all hosts are available.
  2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
  3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
  4. pbs bootstrap cannot launch processes on remote host. You may try using -bootstrap option to select alternative launcher.
[mpiexec@hpc-node1] check_exit_codes (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:117): unable to run bstrap_proxy on hpc-npriv-g001 (pid 2778558, exit code 256)
[mpiexec@hpc-node1] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec@hpc-node1] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec@hpc-node1] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:1065): error waiting for event
[mpiexec@hpc-node1] HYD_print_bstrap_setup_error_message (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:1027): error setting up the bootstrap proxies
[mpiexec@hpc-node1] Possible reasons:
[mpiexec@hpc-node1] 1. Host is unavailable. Please check that all hosts are available.
[mpiexec@hpc-node1] 2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
[mpiexec@hpc-node1] 3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
[mpiexec@hpc-node1] 4. pbs bootstrap cannot launch processes on remote host. You may try using -bootstrap option to select alternative launcher.

The solution is to select ssh as the bootstrap launcher by modifying your mpiexec command:

$ mpiexec -bootstrap ssh ......

For example

$ mpiexec -bootstrap ssh python3 python.text

Alternatively, you can put this line in your .bashrc or PBS script:

export I_MPI_HYDRA_BOOTSTRAP=ssh
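
For example, a minimal PBS script sketch; the job name, resource selection, and application line are placeholders for your own job:

#!/bin/bash
#PBS -N mpi_job
#PBS -l select=2:ncpus=16:mpiprocs=16

# Force Intel MPI's Hydra launcher to bootstrap over ssh instead of the
# PBS-integrated launcher that failed above.
export I_MPI_HYDRA_BOOTSTRAP=ssh

cd "$PBS_O_WORKDIR"
mpiexec ./your_mpi_application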

Issues Installing CUDA Using the RHEL8 Repo

The forum thread found here helped me with the RHEL8 repo issue. I was using Rocky Linux 8.10.

dnf install cuda
Last metadata expiration check: 2:34:35 ago on Tue 14 Jan 2025 08:28:15 AM +08.
Error: 
 Problem: package cuda-12.6.3-1.x86_64 from cuda-rhel8-x86_64 requires nvidia-open >= 560.35.05, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package nvidia-open-3:560.28.03-1.noarch from cuda-rhel8-x86_64 is filtered out by modular filtering
  - package nvidia-open-3:560.35.03-1.noarch from cuda-rhel8-x86_64 is filtered out by modular filtering
  - package nvidia-open-3:560.35.05-1.el8.noarch from cuda-rhel8-x86_64 is filtered out by modular filtering
  - package nvidia-open-3:565.57.01-1.el8.noarch from cuda-rhel8-x86_64 is filtered out by modular filtering
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

In the forum, there was a workaround solution:

Go to the download page and select the V100 driver.

dnf install ./nvidia..........x86_64.rpm

Remove the old cuda if you have it installed and reset the repo module streams.

dnf remove cuda-toolkit nvidia-driver-cuda
dnf module reset nvidia-driver

Install dkms and cuda

dnf module install nvidia-driver:latest-dkms 
dnf install cuda-toolkit nvidia-driver-cuda

Alternatively, this method works, especially if you have not manually installed the drivers or have already manually uninstalled them.

dnf install cuda-toolkit nvidia-driver-cuda
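
Whichever path you take, you can verify the result after a reboot; nvcc is installed under /usr/local/cuda by default, so it may not be on your PATH yet:

nvidia-smi
/usr/local/cuda/bin/nvcc --version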