Configuring CentOS 5 as an SMTP Mail Client with sendmail

This article is adapted from Linux Configure Sendmail as SMTP Mail Client (submission MTA).

To configure sendmail on CentOS as a submission-only mail client, follow the steps below. Sendmail will accept mail submitted from the local server and hand it off to a central MTA; the outgoing MTA always runs in queue-only mode.

Configuring Sendmail in Queue-Only Mode

# vim /etc/sysconfig/sendmail

Modify the “DAEMON” line and set DAEMON=no. This runs sendmail in queue-only mode on the machine: the SMTP server will send but not receive mail requests.

DAEMON=no
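
For reference, the /etc/sysconfig/sendmail file for queue-only operation typically contains just these two lines (QUEUE controls how often the queue is flushed; 1h is the usual distribution default):

DAEMON=no
QUEUE=1h

Restart sendmail afterwards with "service sendmail restart" so the change takes effect.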

Configure Mail Submission

Configure the local server to use the central MTA to send mail for your domain:

# vim /etc/mail/submit.cf
D{MTAHost}mailproxy.myLAN.com
# service sendmail restart

Test Mail

$ mail -s 'Test Message' mymail@mydomain.com < /dev/null
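
If the test message does not arrive, check whether it is stuck in the local queue or was handed off to the central MTA:

# mailq
# tail /var/log/maillog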

Installing Chelsio Unified Wire from RPM for CentOS 5

This writeup is taken from the Chelsio T4 Unified Wire Linux User Guide (PDF) and trimmed for installation on RHEL 5.4 or CentOS 5.4, but it should apply to other RHEL / CentOS versions as well.

Installing Chelsio Software

1. Download the tarball specific to your operating system and architecture from the Chelsio software download site http://service.chelsio.com/

2. For RHEL 5.4, untar using the following command:

# tar -zxvf ChelsioUwire-1.1.0.10-RHEL-5.4-x86_64.tar.gz

3. Navigate to the “ChelsioUwire-x.x.x.x” directory and run the following command:

# ./install.sh

4. Select “1” to install all Chelsio modules built against the inbox OFED, or select “2” to install OFED-1.5.3 and all Chelsio modules built against OFED-1.5.3.

5. Reboot the system for the changes to take effect.

6. Configure the network interface at /etc/sysconfig/network-scripts/ifcfg-ethX, as shown in the example below.
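
As a minimal sketch (the device name eth2 and the addresses are placeholders for your environment), a static configuration for one Chelsio port could look like this:

# /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.10.10
NETMASK=255.255.255.0

Bring the interface up with "ifup eth2" (or "service network restart") after saving the file.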

Compiling and Loading the iWARP (RDMA) Driver

To use the iWARP functionality of Chelsio adapters, you need to install the iWARP drivers as well as the libcxgb4, libibverbs, and librdmacm libraries. Chelsio provides the iWARP drivers and the libcxgb4 library as part of the driver package; the other libraries are provided as part of the OpenFabrics Enterprise Distribution (OFED) package. Load the drivers with:

# modprobe cxgb4
# modprobe iw_cxgb4
# modprobe rdma_ucm
# echo 1 >/sys/module/iw_cxgb4/parameters/peer2peer
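
To confirm that the modules are loaded and that the RDMA device is visible, you can check lsmod and, if the libibverbs utilities are installed, list the verbs devices:

# lsmod | grep cxgb4
# ibv_devices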

Testing Connectivity with ping and rping

On the server:

# rping -s -a server_ip_addr -p 9999

On the client:

# rping -c -Vv -C10 -a server_ip_addr -p 9999

You should see ping data like this:

ping data: rdma-ping-0: ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqr
ping data: rdma-ping-1: BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrs 
ping data: rdma-ping-2: CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrst 
ping data: rdma-ping-3: DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstu 
ping data: rdma-ping-4: EFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuv 
ping data: rdma-ping-5: FGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvw 
ping data: rdma-ping-6: GHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwx 
ping data: rdma-ping-7: HIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxy 
ping data: rdma-ping-8: IJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz 
ping data: rdma-ping-9: JKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyzA 
client DISCONNECT EVENT...

Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 2)

Note for Installation of the VM:

Remember to add the PCI/PCIe device to the VM. Upon adding it, you should see an entry such as “10:00.4 | Chelsio Communications Chelsio T4 10GB Ethernet”.

Proceed with the installation of the VM; you should be able to see the Ethernet settings. Then proceed with the installation of OFED and the Chelsio drivers.
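
Inside the guest, a quick sanity check that the passed-through adapter is visible before installing OFED and the Chelsio drivers:

# lspci | grep -i chelsio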

Information:

  1. Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 1)

Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 1)

Part 1 of this article is taken from Configuring VMDirectPath I/O pass-through devices on an ESX host. In Part 2, we deal with the Chelsio T4 card configuration after the pass-through has been configured.

1. Configuring pass-through devices

To configure pass-through devices on an ESX host:
  1. Select an ESX host from the Inventory of VMware vSphere Client.
  2. On the Configuration tab, click Advanced Settings. The Pass-through Configuration page lists all available pass-through devices. Note: A green icon indicates that a device is enabled and active. An orange icon indicates that the state of the device has changed and the host must be rebooted before the device can be used.
  3. Click Edit.
  4. Select the devices and click OK. Note: If you have a chipset with VT-d, when you click Advanced Settings in vSphere Client, you can select which devices are dedicated to VMDirectPath I/O.
  5. When the devices are selected, they are marked with an orange icon. Reboot for the change to take effect. After rebooting, the devices are marked with a green icon and are enabled. Note: The configuration changes are saved in the /etc/vmware/esx.conf file. The entry is recorded under the parent PCI bridge; if two devices are under the same PCI bridge, only one entry is recorded. For example, if the PCI slot number where the device was connected is 00:0b:0, it is recorded as /device/000:11.0/owner = “passthru” (11 is the decimal equivalent of the hexadecimal 0b).
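
As a quick check after the reboot, you can look for the pass-through entry directly on the ESX host (the device address 000:11.0 is the example from the note above and will differ on your system):

# grep passthru /etc/vmware/esx.conf
/device/000:11.0/owner = "passthru"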

2. To configure a PCI device on a virtual machine:

  1. From the Inventory in vSphere Client, right-click the virtual machine and choose Edit Settings.
  2. Click the Hardware tab.
  3. Click Add.
  4. Choose the PCI Device.
  5. Click Next. Note: When the device is assigned, the virtual machine must have a memory reservation for the full configured memory size; see the example excerpt after this list.
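
For reference, after the PCI device is added and the full memory reservation is set, the virtual machine's .vmx file contains entries along these lines. This is only an illustrative sketch; the actual IDs and values are generated by vSphere Client when you add the device. pciPassthru0.vendorId is the PCI vendor ID (1425 is Chelsio's), pciPassthru0.deviceId is the adapter's device ID, and sched.mem.min / sched.mem.pin reserve and pin the full memory size in MB:

pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "1425"
pciPassthru0.deviceId = "4401"
sched.mem.min = "4096"
sched.mem.pin = "TRUE"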

 

3. Information

  1. Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 2)

Installing packages for ALPS on CentOS 6

This tutorial is an extension of Installing ALPS 2.0 from source on CentOS 5; the installation applies to CentOS 6 as well. For this tutorial, we will be installing:

  1. python 2.6 and python 2.6-devel (Assumed installed already)
  2. python-setuptools and python-setuptools-devel (Assumed installed already)
  3. blas and lapack
  4. numpy and numpy-f2py and python-matplotlib
  5. h5py
  6. scipy

For this tutorial, I’m trying to refrain from compiling from source as much as possible and rely on the repositories instead. As such, the package versions will lag somewhat behind the latest releases.
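
Before starting, you can quickly confirm that the assumed prerequisites (items 1 and 2 above) are already in place:

# python -V
# rpm -q python-setuptools python-setuptools-devel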

Step 1: Install blas and lapack packages from CentOS Base Repositories

# yum install lapack* blas*
================================================================================
 Package               Arch            Version              Repository     Size
================================================================================
Installing:
 blas                  x86_64          3.2.1-4.el6          base          321 k
 blas-devel            x86_64          3.2.1-4.el6          base          133 k
 lapack                x86_64          3.2.1-4.el6          base          4.3 M
 lapack-devel          x86_64          3.2.1-4.el6          base          4.5 M
Transaction Summary
================================================================================
Install       4 Package(s)

Total download size: 9.2 M
Installed size: 26 M
Is this ok [y/N]: y

Step 2: Install numpy numpy-f2py python-matplotlib

# yum install numpy numpy-f2py python-matplotlib
================================================================================
 Package                  Arch          Version               Repository   Size
================================================================================
Installing:
 numpy                    x86_64        1.3.0-6.2.el6         base        1.6 M
 numpy-f2py               x86_64        1.3.0-6.2.el6         base        430 k
 python-matplotlib        x86_64        0.99.1.2-1.el6        base        3.2 M

Transaction Summary
================================================================================
Install       3 Package(s)

Total download size: 5.3 M
Installed size: 22 M
Is this ok [y/N]: y

Step 3: Install h5py

# yum install h5py
================================================================================
 Package            Arch          Version                     Repository   Size
================================================================================
Installing:
 h5py               x86_64        1.3.1-6.el6                 epel        650 k
Installing for dependencies:
 hdf5-mpich2        x86_64        1.8.5.patch1-7.el6          epel        1.4 M
 liblzf             x86_64        3.6-2.el6                   epel         20 k
 mpich2             x86_64        1.2.1-2.3.el6               base        3.7 M

Transaction Summary
================================================================================
Install       4 Package(s)

Total download size: 5.7 M
Installed size: 17 M
Is this ok [y/N]: y

Step 4: Install scipy

# yum install scipy
================================================================================
 Package               Arch            Version              Repository     Size
================================================================================
Installing:
 scipy                 x86_64          0.7.2-5.el6          epel          5.8 M
Installing for dependencies:
 suitesparse           x86_64          3.4.0-2.el6          epel          782 k

Transaction Summary
================================================================================
Install       2 Package(s)

Total download size: 6.5 M
Installed size: 29 M
Is this ok [y/N]: y
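
Once the four steps are done, a one-liner confirms that all the Python modules import cleanly (the versions printed should match the packages installed above):

$ python -c "import numpy, scipy, h5py, matplotlib; print numpy.__version__, scipy.__version__"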

Using iptables to allow compute nodes to access public network

Objectives:
Compute nodes in an HPC environment are usually physically isolated from the public network and have to route through a gateway, which in small or small-to-medium clusters is often the head node, to reach the internet or the company LAN (for example, for LDAP). You can use iptables to route this traffic through the interface facing the internet.

Scenario:
Traffic from eth0 (private network) of the head node will be routed out through eth1 (internet-facing) of the same head node. The interconnect eth0 is attached to a switch to which the compute nodes are also attached. Some assumptions:

  1. 192.168.1.0/24 is the private network subnet
  2. 155.1.1.1 is the DNS forwarder for public-facing DNS
  3. 155.1.1.2 is the IP address of the external-facing Ethernet interface, i.e. eth1

Ensure that the machine allows IP forwarding:

# cat /proc/sys/net/ipv4/ip_forward

If the output is 0, then IP forwarding is not enabled. If the output is 1, then IP forwarding is enabled.

If your output is 0, you can enable it by running the command:

# echo 1 > /proc/sys/net/ipv4/ip_forward

Or, if you wish to make it permanent, add it to /etc/rc.local:

# vim /etc/rc.local
echo 1 > /proc/sys/net/ipv4/ip_forward
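
Alternatively, the more conventional way to make IP forwarding persistent on CentOS is through /etc/sysctl.conf:

# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1

# sysctl -p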

 

 

Network configuration of the compute node, assuming that eth0 is connected to the private switch. It is very important that you set the GATEWAY to the head node's private IP (192.168.1.1 in this example). Edit /etc/sysconfig/network-scripts/ifcfg-eth0:

# Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet
# Compute Node
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
HWADDR=00:00:00:00:00:00
IPADDR=192.168.1.2
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
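
After editing the file, restart the network service on the compute node so that the new gateway takes effect:

# service network restart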

The DNS settings of the compute nodes (/etc/resolv.conf) should include not only the DNS server of the internal private network but also the DNS forwarders of the external network:

search mydomain
# Private DNS
nameserver 192.168.1.1
# DNS forwarders
nameserver 155.1.1.1

Configure iptables on the cluster head node if you are using the head node as a gateway.

# Using the Headnode as a gateway
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth1 -j SNAT --to-source 155.1.1.2

# Accept all traffic from the private subnet
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -i eth0 -j ACCEPT
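
Depending on the default policy of your FORWARD chain, you may also need to explicitly allow forwarded traffic between the two interfaces. A minimal sketch, assuming eth0 is the private side and eth1 the public side:

# Forward traffic from the compute nodes out to the public network
iptables -A FORWARD -i eth0 -o eth1 -s 192.168.1.0/24 -j ACCEPT
# Allow reply traffic back to the compute nodes
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT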

Restart iptables services

# service iptables save
# service iptables restart

Quick check that the compute nodes have access to the outside:

# nslookup www.centos.org
Server: 155.1.1.1
Address: 155.1.1.1#53

Non-authoritative answer:
Name: www.centos.org
Address: 72.232.194.162
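
A plain ping from a compute node is another quick confirmation that routing itself (and not just DNS) is working:

# ping -c 3 www.centos.org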