Installing ClamAV on Rocky Linux 8

Do read up on What is ClamAV by Liquid Web for more information on ClamAV.

I thought I would list a few pointers that might be of use.

  • ClamAV is a free and open-source antivirus software and a cross-platform antivirus toolkit.
  • For Linux systems, it offers real-time protection, which is a crucial feature against zero-day attacks.
  • ClamAV provides a multi-threaded virtual scanner, a tool for automatic virus database updates, and a command-line scanner.

a. Install ClamAV and its services, which include the antivirus daemon (clamd) and the virus database updater

# dnf install clamav clamd clamav-update
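
On Rocky Linux 8 these packages come from the EPEL repository, so if EPEL is not already enabled on the system, add it first:

# dnf install epel-release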

b. Setting up a Service Account

If you’re planning to run freshclam or clamd as a service on a Linux or Unix system, you should create a service account. The following instructions assume that you will use an account named “clamav” for both services, although you may create a different account name for each if you wish.

# groupadd clamav
# useradd -g clamav -s /bin/false -c "Clam Antivirus" clamav

c. Configure SELinux for ClamAV

# setsebool -P antivirus_can_scan_system 1
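
You can verify that the boolean took effect with getsebool:

# getsebool antivirus_can_scan_system
antivirus_can_scan_system --> on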

d. Run the ClamAV Database Update Command

# freshclam

e. Suggested configuration of /etc/clamd/scan.conf or /etc/clamd/clamd.conf, as given in the ClamAV Setup Notes

ExtendedDetectionInfo yes
FixStaleSocket yes
LocalSocket /var/run/clamav/clamd.ctl
LogFile /var/log/clamav/clamav.log
LogFileMaxSize 5M
LogRotate yes
LogTime yes
MaxDirectoryRecursion 15
MaxThreads 20
OnAccessExcludeUname clamav
OnAccessExcludeUname root
OnAccessIncludePath /home
OnAccessMountPath /home/johnfedoruk
OnAccessPrevention yes
User root
VirusEvent /etc/clamav/detected.sh
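
The VirusEvent hook above points at /etc/clamav/detected.sh, which is not shipped with ClamAV. A minimal sketch of such a script, assuming clamd's documented VirusEvent environment variables, might be:

#!/bin/bash
# Hypothetical VirusEvent hook: append each detection to a log file.
# clamd exports these variables when VirusEvent fires.
echo "$(date) Detected ${CLAM_VIRUSEVENT_VIRUSNAME} in ${CLAM_VIRUSEVENT_FILENAME}" >> /var/log/clamav/detected.log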

f. Create and Edit the system's freshclam.service

vim /usr/lib/systemd/system/freshclam.service
[Unit]
Description = ClamAV Virus Database Updater (freshclam)
After = network.target

[Service]
Type = forking
# -c sets the number of update checks per day; increase 1 to update more often
ExecStart = /usr/bin/freshclam -d -c 1
Restart = on-failure
PrivateTmp = true

[Install]
WantedBy=multi-user.target

g. Start and Enable the FreshClam and Clamd Scanner Services

# systemctl start freshclam
# systemctl enable freshclam
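
The commands above cover freshclam. To run the clamd scanner daemon as well (needed for clamdscan and on-access scanning), the EPEL packaging typically provides an instanced unit named clamd@scan:

# systemctl enable --now clamd@scan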

h. Scanning a Directory

# clamscan -r /tmp
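
clamscan also accepts flags to report only infected files and to log the results; for example (the log path here is illustrative):

# clamscan -r -i --log=/var/log/clamav/manual_scan.log /home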

References:

  1. Installing ClamAV
  2. ClamAV Setup Notes
  3. Install ClamAV Antivirus on Rocky Linux 8 or Alma Linux 8

Issues when Installing Docker on Rocky Linux 8.10

I was installing Docker on Rocky Linux 8.10. These were my steps:

dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce docker-ce-cli containerd.io

I immediately got this error:

Error: 
 Problem 1: problem with installed package podman-4:4.9.4-1.module+el8.10.0+1815+5fe7415e.x86_64
  - package podman-4:4.9.4-1.module+el8.10.0+1815+5fe7415e.x86_64 from @System requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-1.module+el8.10.0+1815+5fe7415e.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-1.module+el8.10.0+1825+623b0c20.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-12.module+el8.10.0+1843+6892ab28.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-13.module+el8.10.0+1871+e6fa1069.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-4:4.9.4-13.module+el8.10.0+1874+ce489889.x86_64 from appstream requires runc >= 1.0.0-57, but none of the providers can be installed

To resolve the issue, add the --allowerasing flag, which allows dnf to remove the conflicting podman stack:

dnf install docker-ce docker-ce-cli containerd.io --allowerasing
================================================================================
 Package                   Arch   Version                Repository        Size
================================================================================
Installing:
 containerd.io             x86_64 1.6.32-3.1.el8         docker-ce-stable  35 M
     replacing  runc.x86_64 1:1.1.12-1.module+el8.10.0+1815+5fe7415e
 docker-ce                 x86_64 3:26.1.3-1.el8         docker-ce-stable  27 M
 docker-ce-cli             x86_64 1:26.1.3-1.el8         docker-ce-stable 7.8 M
Installing dependencies:
 libcgroup                 x86_64 0.41-19.el8            baseos            69 k
Installing weak dependencies:
 docker-buildx-plugin      x86_64 0.14.0-1.el8           docker-ce-stable  14 M
 docker-ce-rootless-extras x86_64 26.1.3-1.el8           docker-ce-stable 5.0 M
 docker-compose-plugin     x86_64 2.27.0-1.el8           docker-ce-stable  13 M
Removing dependent packages:
 buildah                   x86_64 1:1.34.0-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream        31 M
 cockpit-podman            noarch 84.1-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream       682 k
 containers-common         x86_64 2:1-81.module+el8.10.0+1815+5fe7415e
                                                         @AppStream       580 k
 podman                    x86_64 4:4.9.4-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream        52 M
 podman-catatonit          x86_64 4:4.9.4-1.module+el8.10.0+1815+5fe7415e
                                                         @AppStream       794 k

Transaction Summary
================================================================================
Install  7 Packages
Remove   5 Packages

Total download size: 102 M
Is this ok [y/N]: y
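
After the transaction completes, a typical next step is to enable and start the Docker service and verify it with a test container:

systemctl enable --now docker
docker run hello-world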

Unable to run hydra_bstrap_proxy when using mpiexec

If you are facing an issue similar to the error below, where the possible reasons provided are:

  1. Host is unavailable. Please check that all hosts are available.
  2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
  3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
  4. pbs bootstrap cannot launch processes on remote host. You may try using -bootstrap option to select alternative launcher.
[mpiexec@hpc-node1] check_exit_codes (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:117): unable to run bstrap_proxy on hpc-npriv-g001 (pid 2778558, exit code 256)
[mpiexec@hpc-node1] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec@hpc-node1] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec@hpc-node1] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:1065): error waiting for event
[mpiexec@hpc-node1] HYD_print_bstrap_setup_error_message (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:1027): error setting up the bootstrap proxies
[mpiexec@hpc-node1] Possible reasons:
[mpiexec@hpc-node1] 1. Host is unavailable. Please check that all hosts are available.
[mpiexec@hpc-node1] 2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
[mpiexec@hpc-node1] 3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
[mpiexec@hpc-node1] 4. pbs bootstrap cannot launch processes on remote host. You may try using -bootstrap option to select alternative launcher.

The solution is to specify ssh as the bootstrap launcher in your mpiexec command:

$ mpiexec -bootstrap ssh ......

For example

$ mpiexec -bootstrap ssh python3 python.text

Alternatively, you can put the following line in your .bashrc or PBS script:

export I_MPI_HYDRA_BOOTSTRAP=ssh
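
For example, a minimal PBS job script might look like this (the job name and resource request are illustrative):

#!/bin/bash
#PBS -N mpi_job
#PBS -l select=2:ncpus=16:mpiprocs=16
cd $PBS_O_WORKDIR
export I_MPI_HYDRA_BOOTSTRAP=ssh
mpiexec python3 python.text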

Installing Octopus-15.0.0 with OpenMPI on Rocky Linux 8

This is an update to the blog entry Basic Configuration of Octopus 5.0.0 with OpenMPI on CentOS 6.

Prerequisites:

  • GNU Compilers – 12.3
  • OpenMPI – 4.1.5
  • FFTW – 3.3.10
  • LAPACK/BLAS – (Comes with Rocky Linux 8)
  • GSL – 2.7.1

To install Octopus using autoconf, you will need to dnf install the autoconf, automake, and autogen packages:

dnf install autoconf automake autogen

Preparing the Configure file using Autoreconf tools

After downloading the source from https://octopus-code.org/documentation/15/releases/ and unpacking the tarball, you must prepare the environment to generate the configure script. Do take a look at the INSTALL and README files.

autoreconf --install

Prepare the PATH and LD_LIBRARY_PATH Environment

If you are using Environment Modules, it will be much easier; if not, you have to configure $PATH and $LD_LIBRARY_PATH manually:

export PATH=$PATH:/usr/local/openmpi-4.1.5/bin:...
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/openmpi-4.1.5/lib:...

export FC=mpif90
export CC=mpicc
export FCFLAGS="-O3"
export CFLAGS="-O3"

Prepare the Octopus Setup Environment

./configure \
--prefix=/usr/local/octopus-15.0.0 \
--with-libxc-prefix=/usr/local/libxc-6.2.2 \
--with-libxc-include=/usr/local/libxc-6.2.2/include \
--with-gsl-prefix=/usr/local/gsl-2.7.1 \
--with-blas=/usr/lib64/libblas.a \
--with-arpack=/usr/lib64/libarpack.so.2 \
--with-fft-lib="-L/usr/local/fftw-3.3.10/lib" \
--disable-zdotc-test \
--enable-single \
--enable-mpi
make -j 16
make install
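
Once the install finishes, a quick sanity check is to put the new binary on the PATH and ask it for its version:

export PATH=/usr/local/octopus-15.0.0/bin:$PATH
octopus --version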

Disabling IPv6 on Rocky Linux 8 with Ansible

If you wish to disable IPv6 on Rocky Linux 8, there is a wonderful writeup in the script found at https://github.com/juju4/ansible-ipv6/blob/main/tasks/ipv6-disable.yml which you may find useful. If you just need to disable it temporarily and without disruption (assuming you have not been using IPv6 at all), use sysctl:

- name: Disable IPv6 with sysctl
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "1"
    state: "present"
    reload: "yes"
  with_items:
    - net.ipv6.conf.all.disable_ipv6
    - net.ipv6.conf.default.disable_ipv6
    - net.ipv6.conf.lo.disable_ipv6

If you can tolerate a bit of disruption, you may want to disable IPv6 in the network configuration files and restart the network services:

- name: RedHat | disable ipv6 in sysconfig/network
  ansible.builtin.lineinfile:
    dest: /etc/sysconfig/network
    regexp: "^{{ item.regexp }}"
    line: "{{ item.line }}"
    mode: '0644'
    backup: true
    create: true
  with_items:
    - { regexp: 'NETWORKING_IPV6=.*', line: 'NETWORKING_IPV6=NO' }
    - { regexp: 'IPV6INIT=.*', line: 'IPV6INIT=no' }
  notify:
    - Restart network
    - Restart NetworkManager
  when: ansible_os_family == 'RedHat'
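
The notify entries above assume matching handlers are defined elsewhere in the play; a minimal sketch, with the service names assumed, would be:

handlers:
  - name: Restart network
    ansible.builtin.service:
      name: network
      state: restarted

  - name: Restart NetworkManager
    ansible.builtin.service:
      name: NetworkManager
      state: restarted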

Using Ansible to get Flexlm License Information and copy to Shared File Environment

You can use Ansible to extract FlexLM license information from a remote license server and store it in a central location where the information can be displayed.

I use crontab to extract the information every 15 minutes and place it in a central location so that users can check license availability.

- name: Extract Information from ANSYS Lic Server and extract to file
  block:
    - name: Get FlexLM License Info
      ansible.builtin.shell: "/usr/local/ansys_inc/shared_files/licensing/linx64/lmutil lmstat -c ../license_files/ansyslmd.lic -a"
      register: lmstat_output

    - name: Save FlexLM License Output to File on ANSYS Lic Server
      ansible.builtin.copy:
        content: "{{ lmstat_output.stdout }}"
        dest: "/var/log/ansible_logs/ansys_lmstat.log"

    - name: Get FlexLM Output from Remote Server
      ansible.builtin.fetch:
        src: "/var/log/ansible_logs/ansys_lmstat.log"
        dest: "/usr/local/lic_lmstat_log/ansys_lmstat.log"
        flat: yes

The fetch module is useful for fetching files from remote machines and storing them locally in a file tree. For more information, do take a look at Fetch files from remote nodes.

In crontab, I fetch the file every 15 minutes:

*/15 * * * * /root/ansible_cluster/run_lmstat_licsvr.sh

The run_lmstat_licsvr.sh script simply calls ansible-playbook to run the playbook above.
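
A sketch of what run_lmstat_licsvr.sh might contain (the playbook and inventory names are assumptions):

#!/bin/bash
# Hypothetical wrapper: run the lmstat playbook non-interactively from cron
cd /root/ansible_cluster
/usr/bin/ansible-playbook -i inventory lmstat_licsvr.yml >> /var/log/ansible_logs/lmstat_cron.log 2>&1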

SSL connection error For Delinea MFA with DirectControl 

Much of the troubleshooting below is derived from the Knowledgebase article KB-8958: MFA with DirectControl fails with SSL connection error, as well as Preparing a Linux Client Server for Centrify and 2FA for CentOS-7.

Problem:

When attempting to log in with a user that requires MFA, the following error is presented:

$ ssh user@192.168.0.1
SSL Connection Error

Cause:

The error is likely due to a certificate problem. A required certificate may be missing, or the permissions may not be set correctly.

How to check:

# /usr/share/centrifydc/bin/adcdiag
VERSION   : Verify that DirectControl version supports MFA               : Pass
JOINSTATE : Verify that DirectControl is in connected mode               : Pass
ZONECHK   : Verify that MFA is supported in the zone                     : Pass
SSHDCFG   : Verify that SSHD enables ChallengeResponseAuthentication     : Warning
          : Cannot read sshd configuration file. Probably you are not
          : using Delinea openssh. SSH login for MFA users will fail if
          : option ChallengeResponseAuthentication is not set to yes.
          : Please check and ensure ChallengeResponseAuthentication is
          : set to yes in sshd configuration file.
CDCCFG    : Verify that MFA options in centrifydc.conf are correct       : Pass
PROXYCFG  : Verify that HTTP proxy configuration is set properly         : Pass
CLDINST   : Verify that trusted Identity Platform instance is specified  : Pass
          : Successfully connected to Identity Platform and certificate
          : has been verified OK.
CNTRCFG   : Verify that Connectors are configured correctly              : Pass
CURCNTR   : Verify that DirectControl has selected a workable Connector  : Pass
CLOUDROLE : Verify that this machine has permissions to perform Identity
          : Platform authentication                                      : Pass
......
......
......

Check the logs at /var/centrify/tmp…. You may notice errors like:

.....
.....
ERROR:
Not a trusted connector or no valid connector certificate installed locally.
SUGGESTIONS:
1. Verify that the IWA root CA certificate is installed in the system. Please refer to KB-7393 on how to configure the root CA certificate in the system.
2. Please collect connector log if you need Delinea support.
.....
.....

Resolution:

Check whether the certificates have been added at the following locations (a sketch for installing a missing certificate follows the list):

  • /etc/pki/ca-trust/source/anchors/
  • /var/centrify/net/certs
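
If the root CA certificate is missing from the system anchors, you can typically install it and rebuild the trust store as follows (the certificate file name is illustrative):

# cp iwa_root_ca.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust extract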

Check the SSH Settings at

# vim /etc/ssh/sshd_config
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
PasswordAuthentication no


# Change to no to disable s/key passwords
#ChallengeResponseAuthentication yes
ChallengeResponseAuthentication yes

Restart the SSHD Services

# systemctl restart sshd.service

Restart the Centrifydc services

# /usr/share/centrifydc/bin/centrifydc restart

Active Directory Flush and Reload

# adflush -f
# adreload

Troubleshooting Intel VMD Driver Boot Issue on Supermicro Server with Rocky Linux 8.7

I was installing Rocky Linux 8.7 on a Supermicro server with Intel Virtual RAID on CPU (VROC). I could not boot into the Rocky Linux 8.7 installer; the install screen would not appear. Instead, errors like the one below repeated on the screen.

“DMAR: [INTR-REMAP] Request device [bc:00.5] fault index 0x8000 [fault reason 0x25] Blocked a compatibility format interrupt request”

The issue is explained in the article from Intel, “Unable to Boot RHEL* 8.7/9.0 if Intel® VMD Is Enabled for Intel® Virtual RAID on CPU (Intel® VROC) RAID Management”.

Resolution
A problem with the inbox Intel® VMD driver included in RHEL 8.7 and 9.0 was identified, and it is necessary to add the boot parameter intremap=off to the kernel command line while installing the operating system. This will prevent the operating system from encountering any problems.

This particular issue has been fixed via a kernel update and has been implemented in RHEL 9.1.
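
To apply the workaround at install time, highlight the install entry in the boot menu, press 'e' (UEFI) or Tab (BIOS), and append intremap=off to the line that loads the kernel, for example (the exact line and media label vary):

linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=Rocky-8-7-x86_64-dvd quiet intremap=off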

I tried Rocky Linux 8.9 and the issue was fixed.

Optimizing Ansible Performance: Serial Execution

By default, Ansible parallelises tasks across multiple hosts simultaneously, which speeds up automation in large inventories. But sometimes this is not ideal, for example in a load-balanced environment where upgrading all the servers at the same time may cause a loss of service. How do we get Ansible to run the updates at different times? I use the keyword “serial” in the play before executing the roles.

- hosts: standalone_nodes
  become: yes
  serial: 1 
  roles:
        - linux_workstation

Alternatively, you can use a percentage to indicate how many hosts will be updated at a time.

- hosts: standalone_nodes
  become: yes
  serial: 25%
  roles:
        - linux_workstation
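
serial also accepts a list of batch sizes, letting you start cautiously and ramp up:

- hosts: standalone_nodes
  become: yes
  # one host first, then 25% of the hosts, then the rest
  serial:
    - 1
    - 25%
    - 100%
  roles:
    - linux_workstation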

References:

  1. How to implement parallelism and rolling updates in Ansible