Install Windows 7 with VirtIO Drivers on RHEV 3.4

One of the best ways to improve the performance of a Microsoft Windows guest is to use paravirtualised devices and drivers for KVM in the guest. These provide close to bare-metal performance (up to 95%). The drivers are provided by the virtio-win RPM packages in RHEL.

1. The virtio-win driver images can be found on the RHEL server under /usr/share/virtio-win/

# ls -l /usr/share/virtio-win/
drwxr-xr-x. 4 root root      4096 Oct 13 10:24 drivers
drwxr-xr-x. 2 root root      4096 Oct 13 10:24 guest-agent
-rw-r--r--. 1 root root   2949120 May 29  2014 virtio-win-1.7.1_amd64.vfd
-rw-r--r--. 1 root root 149004288 May 29  2014 virtio-win-1.7.1.iso
-rw-r--r--. 1 root root   2949120 May 29  2014 virtio-win-1.7.1_x86.vfd
lrwxrwxrwx. 1 root root        26 Oct 13 10:24 virtio-win_amd64.vfd -> virtio-win-1.7.1_amd64.vfd
lrwxrwxrwx. 1 root root        20 Oct 13 10:24 virtio-win.iso -> virtio-win-1.7.1.iso
lrwxrwxrwx. 1 root root        24 Oct 13 10:24 virtio-win_x86.vfd -> virtio-win-1.7.1_x86.vfd

You may want to add these images to the ISO Library, so that you can attach the virtio-win-*.vfd image to the virtual floppy drive before manually installing Windows.

2. The rhev-tools-setup.iso image can be found in /usr/share/rhev-guest-tools-iso

# ls -l /usr/share/rhev-guest-tools-iso
-rw-r--r--. 1 root root    177272 Jul 29 16:30 LICENSES
-rw-r--r--. 1 root root 350181376 Jul 29 16:30 RHEV-toolsSetup_3.4_9.iso
lrwxrwxrwx. 1 root root        57 Oct 13 10:23 rhev-tools-setup.iso -> /usr/share/rhev-guest-tools-iso/RHEV-toolsSetup_3.4_9.iso
-rw-r--r--. 1 root root       924 Jul 29 16:30 SOURCES

3. Upload drivers and tools to the ISO Storage Domain

# rhevm-iso-uploader -u admin@internal list 
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ISO Storage Domain Name   | Datacenter                | ISO Domain Status
dmn_ixora_iso_vol         | RH_Resource               | active
# rhevm-iso-uploader -u admin@internal --iso-domain=dmn_ixora_iso_vol upload  /usr/share/virtio-win/virtio-win-1.7.1_x86.vfd /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):

4. Create the new VM using the RHEV Manager

Remember to click “Run Once”.
Under “Boot Options”:
– Attach the virtio-win-1.7.1_x86.vfd image (uploaded in step 3) to the floppy drive
– Attach the MS Windows 7 installation ISO to the CD drive


5. Update the Ethernet driver and the PCI Simple Communications Controller by attaching the rhev-tools-setup.iso image uploaded earlier.

6. Once you have booted into Windows 7, go to Device Manager and update the drivers for the missing Ethernet and storage controllers.


You should see:
– Red Hat VirtIO SCSI controller
– Red Hat VirtIO SCSI pass-through controller
– Red Hat VirtIO Ethernet Adapter

Cannot set user id: Resource temporarily unavailable while trying to log in or su as a local user in CentOS

If you encounter this error while trying to log in or run su --login:

# su --login user1
cannot set user id: Resource temporarily unavailable

To resolve the issue, extend the nproc value in /etc/security/limits.conf for the user:

.....
.....
user1       soft    nproc   10240
# End of file

Alternatively, you can edit /etc/security/limits.d/90-nproc.conf, which by default contains:

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024
user1       soft    nproc    10240

In CentOS 6, this error can occur even if you have not set the limit explicitly, because a default nproc limit for all users is set in /etc/security/limits.d/90-nproc.conf. The error appears when the number of processes and threads running as the user has reached the nproc resource limit.
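
To see how close a user is to the limit, you can compare the effective nproc value with the number of threads currently running as that user. A quick check, using user1 from the example above:

# su - user1 -c 'ulimit -u'
# ps -L -u user1 | wc -l

The first command prints the effective limit; the second prints the current thread count (including one header line).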

Compiling and Configuring Python 3.4.1 on CentOS

Step 1: Remember to enable the RPMForge and EPEL repositories.

For more information on repositories, see Repository of CentOS 6 and Scientific Linux 6

Step 2: Download Python-3.4.1 from the Python Download Page
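
For example, to fetch and unpack the source (the exact URL depends on the release you pick from the download page):

# wget https://www.python.org/ftp/python/3.4.1/Python-3.4.1.tgz
# tar -xzf Python-3.4.1.tgz -C /installation_home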

Step 3: Install Prerequisite Software

# yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel

Step 4: Configure and Build

# cd /installation_home/Python-3.4.1
# ./configure --prefix=/usr/local/python-3.4.1
# make
# make install
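
Since Python 3.4 bundles pip via ensurepip, make install should leave both the interpreter and pip under the chosen prefix, which you can confirm with:

# ls /usr/local/python-3.4.1/bin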

Step 5: Check that the correct interpreter is invoked:

# /usr/local/python-3.4.1/bin/python3
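
You can also print the version to confirm you are running the freshly built interpreter:

# /usr/local/python-3.4.1/bin/python3 --version
Python 3.4.1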

Step 6: Run setup.py from the Installation Directory of Python

# python setup.py install

Step 7: Install whatever Python modules you need. Here is an example.
You can use pip3 to install packages. See Using pip to install python packages

# /usr/local/python-3.4.1/bin/pip3 install networkx
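
To verify that the module is importable by the new interpreter (using networkx from the example above):

# /usr/local/python-3.4.1/bin/python3 -c "import networkx; print(networkx.__version__)"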

Checking for Constant Time Stamp Counter

More recent Intel processors include a constant Time Stamp Counter (TSC) that ticks at the processor’s maximum rate regardless of the actual CPU running rate. While this makes timekeeping more consistent, it can skew benchmarks, where a certain amount of spin-up time is spent at a lower clock rate before the OS switches the processor to the higher rate. For more information on the Time Stamp Counter, see Time Stamp Counter (Wikipedia).

To check whether your CPU supports a constant TSC, you can issue the command:

# grep -q constant_tsc /proc/cpuinfo && echo "Constant TSC detected" || echo "No Constant TSC Detected"
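
A related flag, nonstop_tsc, indicates that the TSC also keeps ticking in deep C-states. You can list both flags, deduplicated across logical CPUs, with:

# egrep -o 'constant_tsc|nonstop_tsc' /proc/cpuinfo | sort -u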

Creation of Logical Networks on RHEV

Logical Networks allow segregation of different types of traffic. According to the Red Hat training manual, it is recommended to have three types of networks:

  1. Management Network – connects the hypervisor (RHEV-H) nodes to the RHEV Manager
  2. Storage Network – connects the RHEV-H nodes to NFS and iSCSI storage. If you are using FC, there is no need to create a separate network since the traffic goes over the SAN
  3. Public Network – connects the gateway router, the RHEV-M system and the RHEV-H nodes

 

Configure Logical Networks

Data Centre > Networks > New Logical Network
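
If you prefer to script this rather than use the GUI, RHEV-M also exposes a REST API. A minimal sketch, assuming the engine is reachable at rhevm.example.com and using a placeholder network name (adjust credentials, certificate handling and names to your environment):

# curl -k -u admin@internal:password -H "Content-Type: application/xml" \
  -d '<network><name>net_storage</name><data_center><name>RH_Resource</name></data_center></network>' \
  https://rhevm.example.com/api/networks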


 

Configure Network MAC Address Pool

1. View Current Settings

# rhevm-config -g MacPoolRanges
MacPoolRanges: 00:1a:4a:9f:7e:00-00:1a:4a:9f:7e:ff version: general

2. Set the MAC range to 52:54:00:56:00:01-52:54:00:56:00:FF

# rhevm-config -s MacPoolRanges=52:54:00:56:00:01-52:54:00:56:00:FF

3. Restart the ovirt-engine service

# service ovirt-engine restart
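
To confirm that the new range has taken effect, query the value again:

# rhevm-config -g MacPoolRanges
MacPoolRanges: 52:54:00:56:00:01-52:54:00:56:00:FF version: general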

Only able to ping selected existing nodes over IPoIB when adding new nodes

When I added new nodes and installed the MLNX_OFED drivers from Mellanox, I was strangely only able to ping selected existing or new nodes on the cluster. This was quite a curious problem.

ibstat reported the ports as up, but an ibping test (see Installing Voltaire QDR Infiniband Drivers for CentOS 5.4) failed for selected nodes in the cluster, while others were able to ping back.

Both the openibd and opensmd services were started on all nodes in the cluster.

After some troubleshooting, the only fix was to stop the opensmd service on all the nodes (existing and new) and restart it again:

# service openibd restart
# service opensmd restart
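
Once the services are back up on every node, the port state should show as active again before you retry the ibping test, for example:

# ibstat | grep -i state
State: Active
Physical state: LinkUp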